00:00:00.001 Started by upstream project "autotest-per-patch" build number 127157
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.101 The recommended git tool is: git
00:00:00.102 using credential 00000000-0000-0000-0000-000000000002
00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.145 Fetching changes from the remote Git repository
00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.193 Using shallow fetch with depth 1
00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.193 > git --version # timeout=10
00:00:00.231 > git --version # 'git version 2.39.2'
00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.085 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.096 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.108 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD)
00:00:05.108 > git config core.sparsecheckout # timeout=10
00:00:05.121 > git read-tree -mu HEAD # timeout=10
00:00:05.137 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5
00:00:05.157 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs"
00:00:05.157 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:05.248 [Pipeline] Start of Pipeline
00:00:05.260 [Pipeline] library
00:00:05.261 Loading library shm_lib@master
00:00:05.261 Library shm_lib@master is cached. Copying from home.
00:00:05.278 [Pipeline] node
00:00:05.286 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.289 [Pipeline] {
00:00:05.301 [Pipeline] catchError
00:00:05.302 [Pipeline] {
00:00:05.319 [Pipeline] wrap
00:00:05.330 [Pipeline] {
00:00:05.339 [Pipeline] stage
00:00:05.341 [Pipeline] { (Prologue)
00:00:05.550 [Pipeline] sh
00:00:05.832 + logger -p user.info -t JENKINS-CI
00:00:05.850 [Pipeline] echo
00:00:05.852 Node: WFP16
00:00:05.858 [Pipeline] sh
00:00:06.151 [Pipeline] setCustomBuildProperty
00:00:06.160 [Pipeline] echo
00:00:06.162 Cleanup processes
00:00:06.166 [Pipeline] sh
00:00:06.446 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.446 3815564 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.460 [Pipeline] sh
00:00:06.742 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.742 ++ grep -v 'sudo pgrep'
00:00:06.742 ++ awk '{print $1}'
00:00:06.742 + sudo kill -9
00:00:06.742 + true
00:00:06.755 [Pipeline] cleanWs
00:00:06.765 [WS-CLEANUP] Deleting project workspace...
00:00:06.765 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.771 [WS-CLEANUP] done
00:00:06.774 [Pipeline] setCustomBuildProperty
00:00:06.789 [Pipeline] sh
00:00:07.066 + sudo git config --global --replace-all safe.directory '*'
00:00:07.134 [Pipeline] httpRequest
00:00:07.153 [Pipeline] echo
00:00:07.154 Sorcerer 10.211.164.101 is alive
00:00:07.160 [Pipeline] httpRequest
00:00:07.164 HttpMethod: GET
00:00:07.164 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:07.165 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:07.183 Response Code: HTTP/1.1 200 OK
00:00:07.183 Success: Status code 200 is in the accepted range: 200,404
00:00:07.184 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:28.383 [Pipeline] sh
00:00:28.664 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:28.678 [Pipeline] httpRequest
00:00:28.704 [Pipeline] echo
00:00:28.705 Sorcerer 10.211.164.101 is alive
00:00:28.712 [Pipeline] httpRequest
00:00:28.716 HttpMethod: GET
00:00:28.717 URL: http://10.211.164.101/packages/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:00:28.717 Sending request to url: http://10.211.164.101/packages/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:00:28.724 Response Code: HTTP/1.1 200 OK
00:00:28.724 Success: Status code 200 is in the accepted range: 200,404
00:00:28.725 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:01:14.617 [Pipeline] sh
00:01:14.901 + tar --no-same-owner -xf spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz
00:01:23.042 [Pipeline] sh
00:01:23.326 + git -C spdk log --oneline -n5
00:01:23.326 86fd5638b autotest: reduce RAID tests runs
00:01:23.326 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:23.326 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:23.326 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:23.326 d005e023b raid: fix empty slot not updated in sb after resize
00:01:23.337 [Pipeline] }
00:01:23.354 [Pipeline] // stage
00:01:23.362 [Pipeline] stage
00:01:23.364 [Pipeline] { (Prepare)
00:01:23.381 [Pipeline] writeFile
00:01:23.397 [Pipeline] sh
00:01:23.679 + logger -p user.info -t JENKINS-CI
00:01:23.693 [Pipeline] sh
00:01:23.980 + logger -p user.info -t JENKINS-CI
00:01:23.992 [Pipeline] sh
00:01:24.273 + cat autorun-spdk.conf
00:01:24.273 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.273 SPDK_TEST_NVMF=1
00:01:24.273 SPDK_TEST_NVME_CLI=1
00:01:24.273 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.273 SPDK_TEST_NVMF_NICS=e810
00:01:24.273 SPDK_TEST_VFIOUSER=1
00:01:24.273 SPDK_RUN_UBSAN=1
00:01:24.273 NET_TYPE=phy
00:01:24.281 RUN_NIGHTLY=0
00:01:24.285 [Pipeline] readFile
00:01:24.309 [Pipeline] withEnv
00:01:24.311 [Pipeline] {
00:01:24.324 [Pipeline] sh
00:01:24.608 + set -ex
00:01:24.608 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:24.608 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:24.608 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.608 ++ SPDK_TEST_NVMF=1
00:01:24.608 ++ SPDK_TEST_NVME_CLI=1
00:01:24.608 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.608 ++ SPDK_TEST_NVMF_NICS=e810
00:01:24.608 ++ SPDK_TEST_VFIOUSER=1
00:01:24.608 ++ SPDK_RUN_UBSAN=1
00:01:24.608 ++ NET_TYPE=phy
00:01:24.608 ++ RUN_NIGHTLY=0
00:01:24.608 + case $SPDK_TEST_NVMF_NICS in
00:01:24.608 + DRIVERS=ice
00:01:24.608 + [[ tcp == \r\d\m\a ]]
00:01:24.608 + [[ -n ice ]]
00:01:24.608 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:24.608 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:24.608 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:24.608 rmmod: ERROR: Module irdma is not currently loaded
00:01:24.608 rmmod: ERROR: Module i40iw is not currently loaded
00:01:24.608 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:24.608 + true
00:01:24.608 + for D in $DRIVERS
00:01:24.608 + sudo modprobe ice
00:01:24.608 + exit 0
00:01:24.617 [Pipeline] }
00:01:24.635 [Pipeline] // withEnv
00:01:24.640 [Pipeline] }
00:01:24.657 [Pipeline] // stage
00:01:24.664 [Pipeline] catchError
00:01:24.665 [Pipeline] {
00:01:24.673 [Pipeline] timeout
00:01:24.673 Timeout set to expire in 50 min
00:01:24.674 [Pipeline] {
00:01:24.686 [Pipeline] stage
00:01:24.688 [Pipeline] { (Tests)
00:01:24.701 [Pipeline] sh
00:01:24.985 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.985 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.985 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.985 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:24.985 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:24.985 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:24.985 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:24.985 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:24.985 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:24.985 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:24.985 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:24.985 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.985 + source /etc/os-release
00:01:24.985 ++ NAME='Fedora Linux'
00:01:24.985 ++ VERSION='38 (Cloud Edition)'
00:01:24.985 ++ ID=fedora
00:01:24.985 ++ VERSION_ID=38
00:01:24.985 ++ VERSION_CODENAME=
00:01:24.985 ++ PLATFORM_ID=platform:f38
00:01:24.985 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:24.985 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:24.985 ++ LOGO=fedora-logo-icon
00:01:24.985 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:24.985 ++ HOME_URL=https://fedoraproject.org/
00:01:24.985 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:24.985 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:24.985 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:24.985 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:24.985 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:24.985 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:24.985 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:24.985 ++ SUPPORT_END=2024-05-14
00:01:24.985 ++ VARIANT='Cloud Edition'
00:01:24.985 ++ VARIANT_ID=cloud
00:01:24.985 + uname -a
00:01:24.985 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:24.985 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:27.562 Hugepages
00:01:27.562 node hugesize free / total
00:01:27.562 node0 1048576kB 0 / 0
00:01:27.562 node0 2048kB 0 / 0
00:01:27.562 node1 1048576kB 0 / 0
00:01:27.562 node1 2048kB 0 / 0
00:01:27.562
00:01:27.562 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.562 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:27.562 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:27.562 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:27.562 + rm -f /tmp/spdk-ld-path
00:01:27.562 + source autorun-spdk.conf
00:01:27.562 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.562 ++ SPDK_TEST_NVMF=1
00:01:27.562 ++ SPDK_TEST_NVME_CLI=1
00:01:27.562 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.562 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.562 ++ SPDK_TEST_VFIOUSER=1
00:01:27.562 ++ SPDK_RUN_UBSAN=1
00:01:27.562 ++ NET_TYPE=phy
00:01:27.562 ++ RUN_NIGHTLY=0
00:01:27.562 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.562 + [[ -n '' ]]
00:01:27.562 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.562 + for M in /var/spdk/build-*-manifest.txt
00:01:27.562 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.562 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.562 + for M in /var/spdk/build-*-manifest.txt
00:01:27.562 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.562 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.562 ++ uname
00:01:27.562 + [[ Linux == \L\i\n\u\x ]]
00:01:27.562 + sudo dmesg -T
00:01:27.562 + sudo dmesg --clear
00:01:27.823 + dmesg_pid=3816630
00:01:27.823 + [[ Fedora Linux == FreeBSD ]]
00:01:27.823 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.823 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.823 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.823 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:27.823 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:27.823 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.823 + export FIO_BIN=/usr/src/fio-static/fio
00:01:27.823 + FIO_BIN=/usr/src/fio-static/fio
00:01:27.823 + sudo dmesg -Tw
00:01:27.823 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.823 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.823 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.823 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.823 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.823 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.823 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.823 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.823 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.823 Test configuration:
00:01:27.823 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.823 SPDK_TEST_NVMF=1
00:01:27.823 SPDK_TEST_NVME_CLI=1
00:01:27.823 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.823 SPDK_TEST_NVMF_NICS=e810
00:01:27.823 SPDK_TEST_VFIOUSER=1
00:01:27.823 SPDK_RUN_UBSAN=1
00:01:27.823 NET_TYPE=phy
00:01:27.823 RUN_NIGHTLY=0
11:48:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:27.823 11:48:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:27.823 11:48:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:27.823 11:48:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:27.823 11:48:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.823 11:48:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.823 11:48:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.823 11:48:04 -- paths/export.sh@5 -- $ export PATH
00:01:27.823 11:48:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.823 11:48:04 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:27.823 11:48:04 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:27.823 11:48:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721900884.XXXXXX
00:01:27.823 11:48:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721900884.0leyct
00:01:27.823 11:48:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:27.823 11:48:04 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:27.823 11:48:04 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:27.823 11:48:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:27.823 11:48:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.823 11:48:04 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:27.823 11:48:04 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:27.823 11:48:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.824 11:48:05 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:27.824 11:48:05 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:27.824 11:48:05 -- pm/common@17 -- $ local monitor
00:01:27.824 11:48:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.824 11:48:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.824 11:48:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.824 11:48:05 -- pm/common@21 -- $ date +%s
00:01:27.824 11:48:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.824 11:48:05 -- pm/common@21 -- $ date +%s
00:01:27.824 11:48:05 -- pm/common@25 -- $ sleep 1
00:01:27.824 11:48:05 -- pm/common@21 -- $ date +%s
00:01:27.824 11:48:05 -- pm/common@21 -- $ date +%s
00:01:27.824 11:48:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900885
00:01:27.824 11:48:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900885
00:01:27.824 11:48:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900885
00:01:27.824 11:48:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900885
00:01:27.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900885_collect-vmstat.pm.log
00:01:27.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900885_collect-cpu-temp.pm.log
00:01:27.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900885_collect-cpu-load.pm.log
00:01:27.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900885_collect-bmc-pm.bmc.pm.log
00:01:28.764 11:48:06 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:28.764 11:48:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.764 11:48:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.764 11:48:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.764 11:48:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.764 Thu Jul 25 09:48:06 AM UTC 2024
00:01:28.764 11:48:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.764 v24.09-pre-322-g86fd5638b
00:01:28.764 11:48:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:28.764 11:48:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.764 11:48:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.764 11:48:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:28.764 11:48:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:28.764 11:48:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.024 ************************************
00:01:29.024 START TEST ubsan
00:01:29.024 ************************************
00:01:29.024 11:48:06 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:29.024 using ubsan
00:01:29.024
00:01:29.024 real 0m0.000s
00:01:29.025 user 0m0.000s
00:01:29.025 sys 0m0.000s
00:01:29.025 11:48:06 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:29.025 11:48:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:29.025 ************************************
00:01:29.025 END TEST ubsan
00:01:29.025 ************************************
00:01:29.025 11:48:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:29.025 11:48:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:29.025 11:48:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:29.025 11:48:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:29.025 11:48:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:29.025 11:48:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:29.025 11:48:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:29.025 11:48:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:29.025 11:48:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:29.025 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:29.025 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:29.595 Using 'verbs' RDMA provider
00:01:42.384 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:57.331 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:57.331 Creating mk/config.mk...done.
00:01:57.331 Creating mk/cc.flags.mk...done.
00:01:57.331 Type 'make' to build.
00:01:57.331 11:48:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:57.331 11:48:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:57.331 11:48:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:57.331 11:48:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.331 ************************************
00:01:57.331 START TEST make
00:01:57.331 ************************************
00:01:57.331 11:48:32 make -- common/autotest_common.sh@1125 -- $ make -j112
00:01:57.897 make[1]: Nothing to be done for 'all'.
00:01:57.897 The Meson build system
00:01:57.897 Version: 1.3.1
00:01:57.897 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:57.897 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:57.897 Build type: native build
00:01:57.897 Project name: libvfio-user
00:01:57.897 Project version: 0.0.1
00:01:57.897 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:57.897 C linker for the host machine: cc ld.bfd 2.39-16
00:01:57.897 Host machine cpu family: x86_64
00:01:57.897 Host machine cpu: x86_64
00:01:57.897 Run-time dependency threads found: YES
00:01:57.897 Library dl found: YES
00:01:57.897 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:57.897 Run-time dependency json-c found: YES 0.17
00:01:57.897 Run-time dependency cmocka found: YES 1.1.7
00:01:57.897 Program pytest-3 found: NO
00:01:57.897 Program flake8 found: NO
00:01:57.897 Program misspell-fixer found: NO
00:01:57.897 Program restructuredtext-lint found: NO
00:01:57.897 Program valgrind found: YES (/usr/bin/valgrind)
00:01:57.897 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:57.897 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:57.897 Compiler for C supports arguments -Wwrite-strings: YES
00:01:57.897 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:57.897 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:57.897 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:57.897 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:57.897 Build targets in project: 8
00:01:57.897 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:57.897 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:57.897
00:01:57.897 libvfio-user 0.0.1
00:01:57.897
00:01:57.897 User defined options
00:01:57.897 buildtype : debug
00:01:57.897 default_library: shared
00:01:57.897 libdir : /usr/local/lib
00:01:57.897
00:01:57.897 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:58.155 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:58.413 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:58.413 [2/37] Compiling C object samples/null.p/null.c.o
00:01:58.413 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:58.413 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:58.413 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:58.413 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:58.413 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:58.413 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:58.413 [9/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:58.413 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:58.413 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:58.413 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:58.413 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:58.413 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:58.413 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:58.413 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:58.413 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:58.413 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:58.413 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:58.413 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:58.413 [21/37] Compiling C object samples/server.p/server.c.o
00:01:58.413 [22/37] Compiling C object samples/client.p/client.c.o
00:01:58.413 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:58.413 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:58.413 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:58.413 [26/37] Linking target samples/client
00:01:58.413 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:58.672 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:58.672 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:58.672 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:58.672 [31/37] Linking target test/unit_tests
00:01:58.672 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:58.931 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:58.931 [34/37] Linking target samples/null
00:01:58.931 [35/37] Linking target samples/server
00:01:58.931 [36/37] Linking target samples/lspci
00:01:58.931 [37/37] Linking target samples/gpio-pci-idio-16
00:01:58.931 INFO: autodetecting backend as ninja
00:01:58.931 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:58.931 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:59.190 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:59.190 ninja: no work to do.
00:02:05.760 The Meson build system
00:02:05.760 Version: 1.3.1
00:02:05.760 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:05.760 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:05.760 Build type: native build
00:02:05.760 Program cat found: YES (/usr/bin/cat)
00:02:05.760 Project name: DPDK
00:02:05.760 Project version: 24.03.0
00:02:05.760 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:05.760 C linker for the host machine: cc ld.bfd 2.39-16
00:02:05.760 Host machine cpu family: x86_64
00:02:05.760 Host machine cpu: x86_64
00:02:05.760 Message: ## Building in Developer Mode ##
00:02:05.760 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:05.760 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:05.760 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:05.760 Program python3 found: YES (/usr/bin/python3)
00:02:05.760 Program cat found: YES (/usr/bin/cat)
00:02:05.760 Compiler for C supports arguments -march=native: YES
00:02:05.760 Checking for size of "void *" : 8
00:02:05.760 Checking for size of "void *" : 8 (cached)
00:02:05.760 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:05.760 Library m found: YES
00:02:05.760 Library numa found: YES
00:02:05.760 Has header "numaif.h" : YES
00:02:05.760 Library fdt found: NO
00:02:05.760 Library execinfo found: NO
00:02:05.760 Has header "execinfo.h" : YES
00:02:05.760 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:05.760 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:05.760 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:05.760 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:05.760 Run-time dependency openssl found: YES 3.0.9
00:02:05.760 Run-time dependency libpcap found: YES 1.10.4
00:02:05.760 Has header "pcap.h" with dependency libpcap: YES
00:02:05.760 Compiler for C supports arguments -Wcast-qual: YES
00:02:05.760 Compiler for C supports arguments -Wdeprecated: YES
00:02:05.760 Compiler for C supports arguments -Wformat: YES
00:02:05.760 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:05.760 Compiler for C supports arguments -Wformat-security: NO
00:02:05.760 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:05.760 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:05.760 Compiler for C supports arguments -Wnested-externs: YES
00:02:05.760 Compiler for C supports arguments -Wold-style-definition: YES
00:02:05.760 Compiler for C supports arguments -Wpointer-arith: YES
00:02:05.760 Compiler for C supports arguments -Wsign-compare: YES
00:02:05.760 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:05.760 Compiler for C supports arguments -Wundef: YES
00:02:05.760 Compiler for C supports arguments -Wwrite-strings: YES
00:02:05.760 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:05.760 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:05.760 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:05.760 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:05.760 Program objdump found: YES (/usr/bin/objdump)
00:02:05.760 Compiler for C supports arguments -mavx512f: YES
00:02:05.760 Checking if "AVX512 checking" compiles: YES
00:02:05.760 Fetching value of define "__SSE4_2__" : 1
00:02:05.760 Fetching value of define "__AES__" : 1
00:02:05.760 Fetching value of define "__AVX__" : 1
00:02:05.760 Fetching value of define "__AVX2__" : 1
00:02:05.760 Fetching value of define "__AVX512BW__" : 1
00:02:05.760 Fetching value of define "__AVX512CD__" : 1
00:02:05.760 Fetching value of define "__AVX512DQ__" : 1
00:02:05.760 Fetching value of define "__AVX512F__" : 1
00:02:05.760 Fetching value of define "__AVX512VL__" : 1
00:02:05.760 Fetching value of define "__PCLMUL__" : 1
00:02:05.760 Fetching value of define "__RDRND__" : 1
00:02:05.760 Fetching value of define "__RDSEED__" : 1
00:02:05.760 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:05.760 Fetching value of define "__znver1__" : (undefined)
00:02:05.760 Fetching value of define "__znver2__" : (undefined)
00:02:05.760 Fetching value of define "__znver3__" : (undefined)
00:02:05.760 Fetching value of define "__znver4__" : (undefined)
00:02:05.760 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:05.760 Message: lib/log: Defining dependency "log"
00:02:05.760 Message: lib/kvargs: Defining dependency "kvargs"
00:02:05.760 Message: lib/telemetry: Defining dependency "telemetry"
00:02:05.760 Checking for function "getentropy" : NO
00:02:05.760 Message: lib/eal: Defining dependency "eal"
00:02:05.760 Message: lib/ring: Defining dependency "ring"
00:02:05.760 Message: lib/rcu: Defining dependency "rcu"
00:02:05.760 Message: lib/mempool: Defining dependency "mempool"
00:02:05.760 Message: lib/mbuf: Defining dependency "mbuf"
00:02:05.760 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:05.760 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:05.760 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:05.760 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:05.760 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:05.760 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:05.760 Compiler for C supports arguments -mpclmul: YES
00:02:05.760 Compiler for C supports arguments -maes: YES
00:02:05.761 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:05.761 Compiler for C supports arguments -mavx512bw: YES
00:02:05.761 Compiler for C supports arguments -mavx512dq: YES
00:02:05.761 Compiler for C supports arguments -mavx512vl: YES
00:02:05.761 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:05.761 Compiler for C supports arguments -mavx2: YES
00:02:05.761 Compiler for C supports arguments -mavx: YES
00:02:05.761 Message: lib/net: Defining dependency "net"
00:02:05.761 Message: lib/meter: Defining dependency "meter"
00:02:05.761 Message: lib/ethdev: Defining dependency "ethdev"
00:02:05.761 Message: lib/pci: Defining dependency "pci"
00:02:05.761 Message: lib/cmdline: Defining dependency "cmdline"
00:02:05.761 Message: lib/hash: Defining dependency "hash"
00:02:05.761 Message: lib/timer: Defining dependency "timer"
00:02:05.761 Message: lib/compressdev: Defining dependency "compressdev"
00:02:05.761 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:05.761 Message: lib/dmadev: Defining dependency "dmadev"
00:02:05.761 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:05.761 Message: lib/power: Defining dependency "power"
00:02:05.761 Message: lib/reorder: Defining dependency "reorder"
00:02:05.761 Message: lib/security: Defining dependency "security"
00:02:05.761 Has header "linux/userfaultfd.h" : YES
00:02:05.761 Has header "linux/vduse.h" : YES
00:02:05.761 Message: lib/vhost: Defining dependency "vhost"
00:02:05.761 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:05.761 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:05.761 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:05.761 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:05.761 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:05.761 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:05.761 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:05.761 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:05.761 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:05.761 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:05.761 Program doxygen found: YES (/usr/bin/doxygen)
00:02:05.761 Configuring doxy-api-html.conf using configuration
00:02:05.761 Configuring doxy-api-man.conf using configuration
00:02:05.761 Program mandb found: YES (/usr/bin/mandb)
00:02:05.761 Program sphinx-build found: NO
00:02:05.761 Configuring rte_build_config.h using configuration
00:02:05.761 Message:
00:02:05.761 =================
00:02:05.761 Applications Enabled
00:02:05.761 =================
00:02:05.761
00:02:05.761 apps:
00:02:05.761
00:02:05.761
00:02:05.761 Message:
00:02:05.761 =================
00:02:05.761 Libraries Enabled
00:02:05.761 =================
00:02:05.761
00:02:05.761 libs:
00:02:05.761 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:05.761 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:05.761 cryptodev, dmadev, power, reorder, security, vhost,
00:02:05.761
00:02:05.761 Message:
00:02:05.761 ===============
00:02:05.761 Drivers Enabled
00:02:05.761 ===============
00:02:05.761
00:02:05.761 common:
00:02:05.761
00:02:05.761 bus:
00:02:05.761 pci, vdev,
00:02:05.761 mempool:
00:02:05.761 ring,
00:02:05.761 dma:
00:02:05.761
00:02:05.761 net:
00:02:05.761
00:02:05.761 crypto:
00:02:05.761
00:02:05.761 compress:
00:02:05.761
00:02:05.761 vdpa:
00:02:05.761
00:02:05.761
00:02:05.761 Message:
00:02:05.761 =================
00:02:05.761 Content Skipped
00:02:05.761 =================
00:02:05.761
00:02:05.761 apps:
00:02:05.761 dumpcap: explicitly disabled via build config
00:02:05.761 graph: explicitly disabled via build config
00:02:05.761 pdump: explicitly disabled via build config
00:02:05.761 proc-info: explicitly disabled via build config
00:02:05.761 test-acl: explicitly disabled via build config
00:02:05.761 test-bbdev: explicitly disabled via build config
00:02:05.761 test-cmdline: explicitly disabled via build config
00:02:05.761 test-compress-perf: explicitly disabled via build config
00:02:05.761 test-crypto-perf: explicitly disabled via build config
00:02:05.761 test-dma-perf: explicitly disabled via build config
00:02:05.761 test-eventdev: explicitly disabled via build config
00:02:05.761 test-fib: explicitly disabled via build config
00:02:05.761 test-flow-perf: explicitly disabled via build config
00:02:05.761 test-gpudev: explicitly disabled via build config
00:02:05.761 test-mldev: explicitly disabled via build config
00:02:05.761 test-pipeline: explicitly disabled via build config
00:02:05.761 test-pmd: explicitly disabled via build config
00:02:05.761 test-regex: explicitly disabled via build config
00:02:05.761 test-sad: explicitly disabled via build config
00:02:05.761 test-security-perf: explicitly disabled via build config
00:02:05.761
00:02:05.761 libs:
00:02:05.761 argparse: explicitly disabled via build config
00:02:05.761 metrics: explicitly disabled via build config
00:02:05.761 acl: explicitly disabled via build config
00:02:05.761 bbdev: explicitly disabled via build config
00:02:05.761 bitratestats: explicitly disabled via build config
00:02:05.761 bpf: explicitly disabled via build config
00:02:05.761 cfgfile: explicitly disabled via build config
00:02:05.761 distributor: explicitly disabled via build config
00:02:05.761 efd: explicitly disabled via build config
00:02:05.761 eventdev: explicitly disabled via build config
00:02:05.761 dispatcher: explicitly disabled via build config
00:02:05.761 gpudev: explicitly disabled via build config
00:02:05.761 gro: explicitly disabled via build config
00:02:05.761 gso: explicitly disabled via build config
00:02:05.761 ip_frag: explicitly disabled via build config
00:02:05.761 jobstats: explicitly disabled via build config
00:02:05.761 latencystats: explicitly disabled via build config
00:02:05.761 lpm: explicitly disabled via build config
00:02:05.761 member: explicitly disabled via build config
00:02:05.761 pcapng: explicitly disabled via build config
00:02:05.761 rawdev: explicitly disabled via build config
00:02:05.761 regexdev: explicitly disabled via build config
00:02:05.761 mldev: explicitly disabled via build config
00:02:05.761 rib: explicitly disabled via build config
00:02:05.761 sched: explicitly disabled via build config
00:02:05.761 stack: explicitly disabled via build config
00:02:05.761 ipsec: explicitly disabled via build config
00:02:05.761 pdcp: explicitly disabled via build config
00:02:05.761 fib: explicitly disabled via build config
00:02:05.761 port: explicitly disabled via build config
00:02:05.761 pdump: explicitly disabled via build config
00:02:05.761 table: explicitly disabled via build config
00:02:05.761 pipeline: explicitly disabled via build config
00:02:05.761 graph: explicitly disabled via build config
00:02:05.761 node: explicitly disabled via build config
00:02:05.761
00:02:05.761 drivers:
00:02:05.761 common/cpt: not in enabled drivers build config
00:02:05.761 common/dpaax: not in enabled drivers build config
00:02:05.761 common/iavf: not in enabled drivers build config
00:02:05.761 common/idpf: not in enabled drivers build config
00:02:05.761 common/ionic: not in enabled drivers build config
00:02:05.761 common/mvep: not in enabled drivers build config
00:02:05.761 common/octeontx: not in enabled drivers build config
00:02:05.761 bus/auxiliary: not in enabled drivers build config
00:02:05.761 bus/cdx: not in enabled drivers build config
00:02:05.761 bus/dpaa: not in enabled drivers build config
00:02:05.761 bus/fslmc: not in enabled drivers build config
00:02:05.761 bus/ifpga: not in enabled drivers build config
00:02:05.761 bus/platform: not in enabled drivers build config
00:02:05.761 bus/uacce: not in enabled drivers build config
00:02:05.761 bus/vmbus: not in enabled drivers build config
00:02:05.761 common/cnxk: not in enabled drivers build config
00:02:05.761 common/mlx5: not in enabled drivers build config
00:02:05.761 common/nfp: not in enabled drivers build config
00:02:05.761 common/nitrox: not in enabled drivers build config
00:02:05.761 common/qat: not in enabled drivers build config
00:02:05.761 common/sfc_efx: not in enabled drivers build config
00:02:05.761 mempool/bucket: not in enabled drivers build config
00:02:05.761 mempool/cnxk: not in enabled drivers build config
00:02:05.761 mempool/dpaa: not in enabled drivers build config
00:02:05.761 mempool/dpaa2: not in enabled drivers build config
00:02:05.761 mempool/octeontx: not in enabled drivers build config
00:02:05.761 mempool/stack: not in enabled drivers build config
00:02:05.761 dma/cnxk: not in enabled drivers build config
00:02:05.761 dma/dpaa: not in enabled drivers build config
00:02:05.761 dma/dpaa2: not in enabled drivers build config
00:02:05.761 dma/hisilicon: not in enabled drivers build config
00:02:05.761 dma/idxd: not in enabled drivers build config
00:02:05.761 dma/ioat: not in enabled drivers build config
00:02:05.761 dma/skeleton: not in enabled drivers build config
00:02:05.761 net/af_packet: not in enabled drivers build config
00:02:05.761 net/af_xdp: not in enabled drivers build config
00:02:05.761 net/ark: not in enabled drivers build config
00:02:05.761 net/atlantic: not in enabled drivers build config
00:02:05.761 net/avp: not in enabled drivers build config
00:02:05.761 net/axgbe: not in enabled drivers build config
00:02:05.761 net/bnx2x: not in enabled drivers build config
00:02:05.761 net/bnxt: not in enabled drivers build config
00:02:05.761 net/bonding: not in enabled drivers build config
00:02:05.761 net/cnxk: not in enabled drivers build config
00:02:05.761 net/cpfl: not in enabled drivers build config
00:02:05.761 net/cxgbe: not in enabled drivers build config
00:02:05.761 net/dpaa: not in enabled drivers build config
00:02:05.761 net/dpaa2: not in enabled drivers build config
00:02:05.761 net/e1000: not in enabled drivers build config
00:02:05.761 net/ena: not in enabled drivers build config
00:02:05.761 net/enetc: not in enabled drivers build config
00:02:05.761 net/enetfec: not in enabled drivers build config
00:02:05.762 net/enic: not in enabled drivers build config
00:02:05.762 net/failsafe: not in enabled drivers build config
00:02:05.762 net/fm10k: not in enabled drivers build config
00:02:05.762 net/gve: not in enabled drivers build config
00:02:05.762 net/hinic: not in enabled drivers build config
00:02:05.762 net/hns3: not in enabled drivers build config
00:02:05.762 net/i40e: not in enabled drivers build config
00:02:05.762 net/iavf: not in enabled drivers build config
00:02:05.762 net/ice: not in enabled drivers build config
00:02:05.762 net/idpf: not in enabled drivers build config
00:02:05.762 net/igc: not in enabled drivers build config
00:02:05.762 net/ionic: not in enabled drivers build config
00:02:05.762 net/ipn3ke: not in enabled drivers build config
00:02:05.762 net/ixgbe: not in enabled drivers build config
00:02:05.762 net/mana: not in enabled drivers build config
00:02:05.762 net/memif: not in enabled drivers build config
00:02:05.762 net/mlx4: not in enabled drivers build config
00:02:05.762 net/mlx5: not in enabled drivers build config
00:02:05.762 net/mvneta: not in enabled drivers build config
00:02:05.762 net/mvpp2: not in enabled drivers build config
00:02:05.762 net/netvsc: not in enabled drivers build config
00:02:05.762 net/nfb: not in enabled drivers build config
00:02:05.762 net/nfp: not in enabled drivers build config
00:02:05.762 net/ngbe: not in enabled drivers build config
00:02:05.762 net/null: not in enabled drivers build config
00:02:05.762 net/octeontx: not in enabled drivers build config
00:02:05.762 net/octeon_ep: not in enabled drivers build config
00:02:05.762 net/pcap: not in enabled drivers build config
00:02:05.762 net/pfe: not in enabled drivers build config
00:02:05.762 net/qede: not in enabled drivers build config
00:02:05.762 net/ring: not in enabled drivers build config
00:02:05.762 net/sfc: not in enabled drivers build config
00:02:05.762 net/softnic: not in enabled drivers build config
00:02:05.762 net/tap: not in enabled drivers build config
00:02:05.762 net/thunderx: not in enabled drivers build config
00:02:05.762 net/txgbe: not in enabled drivers build config
00:02:05.762 net/vdev_netvsc: not in enabled drivers build config
00:02:05.762 net/vhost: not in enabled drivers build config
00:02:05.762 net/virtio: not in enabled drivers build config
00:02:05.762 net/vmxnet3: not in enabled drivers build config
00:02:05.762 raw/*: missing internal dependency, "rawdev"
00:02:05.762 crypto/armv8: not in enabled drivers build config
00:02:05.762 crypto/bcmfs: not in enabled drivers build config
00:02:05.762 crypto/caam_jr: not in enabled drivers build config
00:02:05.762 crypto/ccp: not in enabled drivers build config
00:02:05.762 crypto/cnxk: not in enabled drivers build config
00:02:05.762 crypto/dpaa_sec: not in enabled drivers build config
00:02:05.762 crypto/dpaa2_sec: not in enabled drivers build config
00:02:05.762 crypto/ipsec_mb: not in enabled drivers build config
00:02:05.762 crypto/mlx5: not in enabled drivers build config
00:02:05.762 crypto/mvsam: not in enabled drivers build config
00:02:05.762 crypto/nitrox: not in enabled drivers build config
00:02:05.762 crypto/null: not in enabled drivers build config
00:02:05.762 crypto/octeontx: not in enabled drivers build config
00:02:05.762 crypto/openssl: not in enabled drivers build config
00:02:05.762 crypto/scheduler: not in enabled drivers build config
00:02:05.762 crypto/uadk: not in enabled drivers build config
00:02:05.762 crypto/virtio: not in enabled drivers build config
00:02:05.762 compress/isal: not in enabled drivers build config
00:02:05.762 compress/mlx5: not in enabled drivers build config
00:02:05.762 compress/nitrox: not in enabled drivers build config
00:02:05.762 compress/octeontx: not in enabled drivers build config
00:02:05.762 compress/zlib: not in enabled drivers build config
00:02:05.762 regex/*: missing internal dependency, "regexdev"
00:02:05.762 ml/*: missing internal dependency, "mldev"
00:02:05.762 vdpa/ifc: not in enabled drivers build config
00:02:05.762 vdpa/mlx5: not in enabled drivers build config
00:02:05.762 vdpa/nfp: not in enabled drivers build config
00:02:05.762 vdpa/sfc: not in enabled drivers build config
00:02:05.762 event/*: missing internal dependency, "eventdev"
00:02:05.762 baseband/*: missing internal dependency, "bbdev"
00:02:05.762 gpu/*: missing internal dependency, "gpudev"
00:02:05.762
00:02:05.762
00:02:05.762 Build targets in project: 85
00:02:05.762
00:02:05.762 DPDK 24.03.0
00:02:05.762
00:02:05.762 User defined options
00:02:05.762 buildtype : debug
00:02:05.762 default_library : shared
00:02:05.762 libdir : lib
00:02:05.762 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:05.762 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:05.762 c_link_args :
00:02:05.762 cpu_instruction_set: native
00:02:05.762 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:05.762 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:05.762 enable_docs : false
00:02:05.762 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:05.762 enable_kmods : false
00:02:05.762 max_lcores : 128
00:02:05.762 tests : false
00:02:05.762
00:02:05.762 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.021 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:06.287 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:06.287 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:06.287 [3/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:06.287 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:06.287 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:06.287 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:06.287 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:06.287 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:06.287 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:06.287 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:06.287 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:06.287 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:06.287 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:06.549 [14/268] Linking static target lib/librte_kvargs.a
00:02:06.549 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:06.549 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:06.549 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:06.549 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:06.549 [19/268] Linking static target lib/librte_log.a
00:02:06.549 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:06.549 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:06.549 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:06.549 [23/268] Linking static target lib/librte_pci.a
00:02:06.549 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:06.549 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
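The "User defined options" summary in the configure output above corresponds to a meson setup invocation roughly like the following. This is a reconstructed sketch, not the literal command SPDK's build scripts ran: every option value is copied from the logged summary, but the `-Doption=value` command-line form and the working directory are assumptions.

```shell
# Sketch only: reconstructs the DPDK configure step from the logged
# "User defined options"; run from the dpdk source dir shown in the log.
# The -D flag spelling is an assumption; option values are from the log.
meson setup build-tmp \
  -Dbuildtype=debug \
  -Ddefault_library=shared \
  -Dlibdir=lib \
  -Dprefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps=test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump \
  -Ddisable_libs=bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false
ninja -C build-tmp   # the [N/268] compile steps that follow in the log
```

With only `bus/pci`, `bus/vdev`, and `mempool/ring` enabled and all test apps and optional libs disabled, the target count drops to the 85 build targets the log reports.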
00:02:06.549 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:06.549 [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:06.549 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:06.549 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:06.549 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:06.549 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:06.811 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:06.811 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:06.811 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:06.811 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:06.811 [36/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:06.811 [37/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:06.811 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:06.811 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:06.811 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:06.811 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:06.811 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:06.811 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:06.811 [44/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:07.069 [45/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:07.070 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:07.070 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:07.070 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:07.070 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:07.070 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:07.070 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:07.070 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:07.070 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:07.070 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:07.070 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:07.070 [56/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:07.070 [57/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:07.070 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:07.070 [59/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:07.070 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:07.070 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:07.070 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:07.070 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:07.070 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:07.070 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:07.070 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:07.070 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:07.070 [68/268] Linking static target lib/librte_meter.a
00:02:07.070 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:07.070 [70/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:07.070 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:07.070 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:07.070 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:07.070 [74/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:07.070 [75/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:07.070 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:07.070 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:07.070 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:07.070 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:07.070 [80/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:07.070 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:07.070 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:07.070 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:07.070 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:07.070 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:07.070 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:07.070 [87/268] Linking static target lib/librte_telemetry.a
00:02:07.070 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:07.070 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:07.070 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:07.070 [91/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.070 [92/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:07.070 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:07.070 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:07.070 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:07.070 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:07.070 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:07.070 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:07.070 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:07.070 [100/268] Linking static target lib/librte_mempool.a
00:02:07.070 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:07.070 [102/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:07.070 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:07.070 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:07.070 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:07.070 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:07.070 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:07.070 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:07.070 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:07.070 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:07.070 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:07.070 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:07.070 [113/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:07.070 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:07.070 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:07.070 [116/268] Linking static target lib/librte_cmdline.a
00:02:07.070 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:07.070 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:07.070 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:07.070 [120/268] Linking static target lib/librte_net.a
00:02:07.070 [121/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:07.326 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:07.326 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:07.326 [124/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.326 [125/268] Linking static target lib/librte_rcu.a
00:02:07.326 [126/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:07.326 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:07.326 [128/268] Linking static target lib/librte_timer.a
00:02:07.326 [129/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:07.326 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:07.326 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:07.326 [132/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:07.326 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:07.326 [134/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:07.326 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:07.326 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:07.326 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:07.326 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:07.326 [139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:07.326 [140/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:07.326 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:07.326 [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.326 [143/268] Linking static target lib/librte_dmadev.a
00:02:07.326 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:07.326 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:07.326 [146/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:07.326 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:07.326 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:07.326 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:07.326 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:07.326 [151/268] Linking static target lib/librte_compressdev.a
00:02:07.326 [152/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:07.326 [153/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:07.326 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:07.326 [155/268] Linking static target lib/librte_eal.a
00:02:07.584 [156/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:07.584 [157/268] Linking static target lib/librte_mbuf.a
00:02:07.584 [158/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.584 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:07.584 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.584 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:07.584 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:07.584 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:07.584 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:07.584 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:07.584 [166/268] Linking target lib/librte_log.so.24.1
00:02:07.584 [167/268] Linking static target lib/librte_power.a
00:02:07.584 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:07.584 [169/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:07.584 [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.584 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:07.584 [172/268] Linking static target lib/librte_reorder.a
00:02:07.584 [173/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:07.584 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:07.584 [175/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:07.584 [176/268] Linking static target lib/librte_ring.a
00:02:07.584 [177/268] Linking static target lib/librte_hash.a
00:02:07.584 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:07.584 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:07.584 [180/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.584 [181/268] Linking static target lib/librte_security.a
00:02:07.584 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:07.584 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:07.584 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:07.584 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:07.584 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:07.584 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:07.584 [188/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:07.843 [189/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.843 [190/268] Linking target lib/librte_kvargs.so.24.1
00:02:07.843 [191/268] Linking target lib/librte_telemetry.so.24.1
00:02:07.843 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:07.843 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:07.843 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:07.843 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:07.843 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:07.843 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:07.843 [198/268] Linking static target drivers/librte_bus_vdev.a
00:02:07.843 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:07.844 [200/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:07.844 [201/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:07.844 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:07.844 [203/268] Linking static target lib/librte_cryptodev.a
00:02:07.844 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:07.844 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:07.844 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:07.844 [207/268] Generating
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.844 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.844 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:08.102 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.102 [211/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.102 [212/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.102 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.102 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.102 [215/268] Linking static target drivers/librte_mempool_ring.a 00:02:08.102 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.102 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.102 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.361 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.361 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.361 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.361 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.361 [223/268] Linking static target lib/librte_ethdev.a 00:02:08.361 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.620 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.620 [226/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.620 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.015 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.015 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.015 [230/268] Linking static target lib/librte_vhost.a 00:02:11.923 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.198 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.146 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.146 [234/268] Linking target lib/librte_eal.so.24.1 00:02:18.407 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:18.407 [236/268] Linking target lib/librte_ring.so.24.1 00:02:18.407 [237/268] Linking target lib/librte_timer.so.24.1 00:02:18.407 [238/268] Linking target lib/librte_pci.so.24.1 00:02:18.407 [239/268] Linking target lib/librte_meter.so.24.1 00:02:18.407 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:18.407 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.407 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.407 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:18.407 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.407 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:18.407 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.407 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.407 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:18.407 [249/268] Linking target 
lib/librte_rcu.so.24.1 00:02:18.666 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.666 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.925 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.925 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.925 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.925 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:18.925 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:19.184 [257/268] Linking target lib/librte_net.so.24.1 00:02:19.184 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:19.184 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:19.444 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:19.444 [261/268] Linking target lib/librte_hash.so.24.1 00:02:19.444 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:19.444 [263/268] Linking target lib/librte_security.so.24.1 00:02:19.444 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:19.444 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:19.702 [266/268] Linking target lib/librte_power.so.24.1 00:02:19.702 [267/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:19.702 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:19.702 INFO: autodetecting backend as ninja 00:02:19.702 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:21.079 CC lib/log/log.o 00:02:21.079 CC lib/log/log_flags.o 00:02:21.079 CC lib/ut/ut.o 00:02:21.079 CC lib/log/log_deprecated.o 00:02:21.079 CC lib/ut_mock/mock.o 00:02:21.079 LIB libspdk_ut.a 00:02:21.079 LIB libspdk_log.a 00:02:21.079 LIB libspdk_ut_mock.a 00:02:21.079 
SO libspdk_ut.so.2.0 00:02:21.079 SO libspdk_ut_mock.so.6.0 00:02:21.079 SO libspdk_log.so.7.0 00:02:21.079 SYMLINK libspdk_ut.so 00:02:21.338 SYMLINK libspdk_ut_mock.so 00:02:21.338 SYMLINK libspdk_log.so 00:02:21.596 CC lib/util/base64.o 00:02:21.596 CC lib/util/bit_array.o 00:02:21.596 CC lib/util/crc16.o 00:02:21.596 CC lib/util/cpuset.o 00:02:21.596 CC lib/ioat/ioat.o 00:02:21.596 CC lib/util/crc32.o 00:02:21.596 CC lib/util/crc32c.o 00:02:21.596 CXX lib/trace_parser/trace.o 00:02:21.596 CC lib/util/crc32_ieee.o 00:02:21.596 CC lib/util/crc64.o 00:02:21.596 CC lib/dma/dma.o 00:02:21.596 CC lib/util/dif.o 00:02:21.596 CC lib/util/fd.o 00:02:21.596 CC lib/util/fd_group.o 00:02:21.596 CC lib/util/file.o 00:02:21.596 CC lib/util/hexlify.o 00:02:21.596 CC lib/util/iov.o 00:02:21.596 CC lib/util/net.o 00:02:21.596 CC lib/util/math.o 00:02:21.596 CC lib/util/pipe.o 00:02:21.596 CC lib/util/string.o 00:02:21.596 CC lib/util/strerror_tls.o 00:02:21.596 CC lib/util/uuid.o 00:02:21.596 CC lib/util/xor.o 00:02:21.596 CC lib/util/zipf.o 00:02:21.855 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.855 CC lib/vfio_user/host/vfio_user.o 00:02:21.855 LIB libspdk_dma.a 00:02:21.855 SO libspdk_dma.so.4.0 00:02:21.855 SYMLINK libspdk_dma.so 00:02:22.113 LIB libspdk_vfio_user.a 00:02:22.113 SO libspdk_vfio_user.so.5.0 00:02:22.113 LIB libspdk_ioat.a 00:02:22.113 SYMLINK libspdk_vfio_user.so 00:02:22.113 SO libspdk_ioat.so.7.0 00:02:22.113 LIB libspdk_util.a 00:02:22.113 SYMLINK libspdk_ioat.so 00:02:22.371 SO libspdk_util.so.10.0 00:02:22.371 SYMLINK libspdk_util.so 00:02:22.630 LIB libspdk_trace_parser.a 00:02:22.630 SO libspdk_trace_parser.so.5.0 00:02:22.630 SYMLINK libspdk_trace_parser.so 00:02:22.630 CC lib/json/json_parse.o 00:02:22.630 CC lib/idxd/idxd.o 00:02:22.630 CC lib/json/json_util.o 00:02:22.630 CC lib/json/json_write.o 00:02:22.630 CC lib/idxd/idxd_kernel.o 00:02:22.630 CC lib/idxd/idxd_user.o 00:02:22.630 CC lib/rdma_utils/rdma_utils.o 00:02:22.630 CC 
lib/conf/conf.o 00:02:22.630 CC lib/env_dpdk/env.o 00:02:22.630 CC lib/rdma_provider/common.o 00:02:22.630 CC lib/env_dpdk/memory.o 00:02:22.630 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:22.630 CC lib/vmd/vmd.o 00:02:22.630 CC lib/env_dpdk/pci.o 00:02:22.630 CC lib/env_dpdk/init.o 00:02:22.630 CC lib/vmd/led.o 00:02:22.630 CC lib/env_dpdk/threads.o 00:02:22.630 CC lib/env_dpdk/pci_ioat.o 00:02:22.630 CC lib/env_dpdk/pci_virtio.o 00:02:22.630 CC lib/env_dpdk/pci_vmd.o 00:02:22.630 CC lib/env_dpdk/pci_idxd.o 00:02:22.891 CC lib/env_dpdk/pci_event.o 00:02:22.891 CC lib/env_dpdk/sigbus_handler.o 00:02:22.891 CC lib/env_dpdk/pci_dpdk.o 00:02:22.891 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.891 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:22.891 LIB libspdk_rdma_provider.a 00:02:23.149 LIB libspdk_conf.a 00:02:23.149 SO libspdk_rdma_provider.so.6.0 00:02:23.149 SO libspdk_conf.so.6.0 00:02:23.150 LIB libspdk_rdma_utils.a 00:02:23.150 LIB libspdk_json.a 00:02:23.150 SYMLINK libspdk_rdma_provider.so 00:02:23.150 SO libspdk_rdma_utils.so.1.0 00:02:23.150 SO libspdk_json.so.6.0 00:02:23.150 SYMLINK libspdk_conf.so 00:02:23.150 SYMLINK libspdk_rdma_utils.so 00:02:23.150 SYMLINK libspdk_json.so 00:02:23.409 LIB libspdk_idxd.a 00:02:23.409 SO libspdk_idxd.so.12.0 00:02:23.409 LIB libspdk_vmd.a 00:02:23.409 SYMLINK libspdk_idxd.so 00:02:23.409 SO libspdk_vmd.so.6.0 00:02:23.409 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.409 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.409 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.409 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.667 SYMLINK libspdk_vmd.so 00:02:23.667 LIB libspdk_env_dpdk.a 00:02:23.667 LIB libspdk_jsonrpc.a 00:02:23.667 SO libspdk_env_dpdk.so.15.0 00:02:23.925 SO libspdk_jsonrpc.so.6.0 00:02:23.925 SYMLINK libspdk_jsonrpc.so 00:02:23.925 SYMLINK libspdk_env_dpdk.so 00:02:24.185 CC lib/rpc/rpc.o 00:02:24.445 LIB libspdk_rpc.a 00:02:24.445 SO libspdk_rpc.so.6.0 00:02:24.445 SYMLINK libspdk_rpc.so 00:02:25.014 CC lib/notify/notify.o 
00:02:25.014 CC lib/trace/trace.o 00:02:25.014 CC lib/notify/notify_rpc.o 00:02:25.014 CC lib/trace/trace_flags.o 00:02:25.014 CC lib/trace/trace_rpc.o 00:02:25.014 CC lib/keyring/keyring.o 00:02:25.014 CC lib/keyring/keyring_rpc.o 00:02:25.014 LIB libspdk_notify.a 00:02:25.014 SO libspdk_notify.so.6.0 00:02:25.014 LIB libspdk_keyring.a 00:02:25.014 LIB libspdk_trace.a 00:02:25.273 SYMLINK libspdk_notify.so 00:02:25.273 SO libspdk_keyring.so.1.0 00:02:25.273 SO libspdk_trace.so.10.0 00:02:25.273 SYMLINK libspdk_keyring.so 00:02:25.273 SYMLINK libspdk_trace.so 00:02:25.532 CC lib/sock/sock.o 00:02:25.532 CC lib/sock/sock_rpc.o 00:02:25.532 CC lib/thread/iobuf.o 00:02:25.532 CC lib/thread/thread.o 00:02:26.101 LIB libspdk_sock.a 00:02:26.101 SO libspdk_sock.so.10.0 00:02:26.101 SYMLINK libspdk_sock.so 00:02:26.359 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.359 CC lib/nvme/nvme_ctrlr.o 00:02:26.359 CC lib/nvme/nvme_fabric.o 00:02:26.359 CC lib/nvme/nvme_ns_cmd.o 00:02:26.359 CC lib/nvme/nvme_pcie_common.o 00:02:26.359 CC lib/nvme/nvme_ns.o 00:02:26.359 CC lib/nvme/nvme_pcie.o 00:02:26.359 CC lib/nvme/nvme_qpair.o 00:02:26.359 CC lib/nvme/nvme_quirks.o 00:02:26.359 CC lib/nvme/nvme.o 00:02:26.359 CC lib/nvme/nvme_transport.o 00:02:26.359 CC lib/nvme/nvme_discovery.o 00:02:26.359 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.359 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.359 CC lib/nvme/nvme_tcp.o 00:02:26.359 CC lib/nvme/nvme_opal.o 00:02:26.359 CC lib/nvme/nvme_io_msg.o 00:02:26.359 CC lib/nvme/nvme_poll_group.o 00:02:26.359 CC lib/nvme/nvme_zns.o 00:02:26.359 CC lib/nvme/nvme_stubs.o 00:02:26.359 CC lib/nvme/nvme_auth.o 00:02:26.359 CC lib/nvme/nvme_cuse.o 00:02:26.359 CC lib/nvme/nvme_vfio_user.o 00:02:26.359 CC lib/nvme/nvme_rdma.o 00:02:27.292 LIB libspdk_thread.a 00:02:27.292 SO libspdk_thread.so.10.1 00:02:27.292 SYMLINK libspdk_thread.so 00:02:27.550 CC lib/accel/accel.o 00:02:27.550 CC lib/accel/accel_rpc.o 00:02:27.550 CC lib/accel/accel_sw.o 00:02:27.550 CC 
lib/init/json_config.o 00:02:27.550 CC lib/init/subsystem.o 00:02:27.550 CC lib/init/subsystem_rpc.o 00:02:27.550 CC lib/vfu_tgt/tgt_endpoint.o 00:02:27.550 CC lib/vfu_tgt/tgt_rpc.o 00:02:27.550 CC lib/init/rpc.o 00:02:27.550 CC lib/virtio/virtio.o 00:02:27.550 CC lib/virtio/virtio_vhost_user.o 00:02:27.550 CC lib/virtio/virtio_vfio_user.o 00:02:27.550 CC lib/virtio/virtio_pci.o 00:02:27.550 CC lib/blob/blobstore.o 00:02:27.550 CC lib/blob/request.o 00:02:27.550 CC lib/blob/zeroes.o 00:02:27.550 CC lib/blob/blob_bs_dev.o 00:02:27.809 LIB libspdk_vfu_tgt.a 00:02:27.809 LIB libspdk_init.a 00:02:27.809 SO libspdk_vfu_tgt.so.3.0 00:02:27.809 SO libspdk_init.so.5.0 00:02:27.809 LIB libspdk_virtio.a 00:02:27.809 SYMLINK libspdk_vfu_tgt.so 00:02:27.809 SO libspdk_virtio.so.7.0 00:02:27.809 SYMLINK libspdk_init.so 00:02:28.067 SYMLINK libspdk_virtio.so 00:02:28.067 CC lib/event/app.o 00:02:28.067 CC lib/event/reactor.o 00:02:28.325 CC lib/event/log_rpc.o 00:02:28.325 CC lib/event/app_rpc.o 00:02:28.325 CC lib/event/scheduler_static.o 00:02:28.582 LIB libspdk_accel.a 00:02:28.582 LIB libspdk_nvme.a 00:02:28.582 SO libspdk_accel.so.16.0 00:02:28.582 SYMLINK libspdk_accel.so 00:02:28.582 LIB libspdk_event.a 00:02:28.582 SO libspdk_nvme.so.13.1 00:02:28.582 SO libspdk_event.so.14.0 00:02:28.839 SYMLINK libspdk_event.so 00:02:28.839 CC lib/bdev/bdev.o 00:02:28.839 CC lib/bdev/bdev_rpc.o 00:02:28.839 CC lib/bdev/bdev_zone.o 00:02:28.839 CC lib/bdev/part.o 00:02:28.839 CC lib/bdev/scsi_nvme.o 00:02:29.096 SYMLINK libspdk_nvme.so 00:02:30.473 LIB libspdk_blob.a 00:02:30.473 SO libspdk_blob.so.11.0 00:02:30.732 SYMLINK libspdk_blob.so 00:02:30.990 CC lib/blobfs/blobfs.o 00:02:30.990 CC lib/blobfs/tree.o 00:02:30.990 CC lib/lvol/lvol.o 00:02:31.557 LIB libspdk_lvol.a 00:02:31.557 LIB libspdk_bdev.a 00:02:31.557 SO libspdk_lvol.so.10.0 00:02:31.557 SO libspdk_bdev.so.16.0 00:02:31.557 SYMLINK libspdk_lvol.so 00:02:31.817 LIB libspdk_blobfs.a 00:02:31.817 SYMLINK libspdk_bdev.so 
00:02:31.817 SO libspdk_blobfs.so.10.0 00:02:31.817 SYMLINK libspdk_blobfs.so 00:02:32.075 CC lib/nbd/nbd.o 00:02:32.075 CC lib/ublk/ublk.o 00:02:32.075 CC lib/nbd/nbd_rpc.o 00:02:32.075 CC lib/ublk/ublk_rpc.o 00:02:32.075 CC lib/scsi/dev.o 00:02:32.075 CC lib/ftl/ftl_core.o 00:02:32.075 CC lib/scsi/lun.o 00:02:32.075 CC lib/ftl/ftl_init.o 00:02:32.075 CC lib/nvmf/ctrlr.o 00:02:32.075 CC lib/scsi/port.o 00:02:32.075 CC lib/ftl/ftl_layout.o 00:02:32.075 CC lib/scsi/scsi.o 00:02:32.076 CC lib/nvmf/ctrlr_discovery.o 00:02:32.076 CC lib/ftl/ftl_debug.o 00:02:32.076 CC lib/nvmf/ctrlr_bdev.o 00:02:32.076 CC lib/scsi/scsi_bdev.o 00:02:32.076 CC lib/ftl/ftl_io.o 00:02:32.076 CC lib/nvmf/subsystem.o 00:02:32.076 CC lib/scsi/scsi_pr.o 00:02:32.076 CC lib/nvmf/nvmf.o 00:02:32.076 CC lib/scsi/scsi_rpc.o 00:02:32.076 CC lib/ftl/ftl_sb.o 00:02:32.076 CC lib/scsi/task.o 00:02:32.076 CC lib/ftl/ftl_l2p.o 00:02:32.076 CC lib/nvmf/nvmf_rpc.o 00:02:32.076 CC lib/nvmf/transport.o 00:02:32.076 CC lib/ftl/ftl_l2p_flat.o 00:02:32.076 CC lib/nvmf/tcp.o 00:02:32.076 CC lib/ftl/ftl_nv_cache.o 00:02:32.076 CC lib/ftl/ftl_band.o 00:02:32.076 CC lib/nvmf/stubs.o 00:02:32.076 CC lib/ftl/ftl_band_ops.o 00:02:32.076 CC lib/nvmf/mdns_server.o 00:02:32.076 CC lib/ftl/ftl_writer.o 00:02:32.076 CC lib/nvmf/vfio_user.o 00:02:32.076 CC lib/nvmf/rdma.o 00:02:32.076 CC lib/ftl/ftl_rq.o 00:02:32.076 CC lib/nvmf/auth.o 00:02:32.076 CC lib/ftl/ftl_reloc.o 00:02:32.076 CC lib/ftl/ftl_l2p_cache.o 00:02:32.076 CC lib/ftl/ftl_p2l.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.076 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.076 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.076 CC lib/ftl/utils/ftl_conf.o 00:02:32.076 CC lib/ftl/utils/ftl_md.o 00:02:32.076 CC lib/ftl/utils/ftl_property.o 00:02:32.076 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.076 CC lib/ftl/utils/ftl_mempool.o 00:02:32.076 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.076 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.076 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.076 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.076 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.076 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:32.076 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.076 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.076 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.076 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.076 CC lib/ftl/base/ftl_base_dev.o 00:02:32.076 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.076 CC lib/ftl/base/ftl_base_bdev.o 00:02:32.076 CC lib/ftl/ftl_trace.o 00:02:32.678 LIB libspdk_nbd.a 00:02:32.678 SO libspdk_nbd.so.7.0 00:02:32.936 SYMLINK libspdk_nbd.so 00:02:32.936 LIB libspdk_scsi.a 00:02:32.936 SO libspdk_scsi.so.9.0 00:02:32.936 LIB libspdk_ublk.a 00:02:32.936 SO libspdk_ublk.so.3.0 00:02:33.194 SYMLINK libspdk_scsi.so 00:02:33.194 SYMLINK libspdk_ublk.so 00:02:33.453 LIB libspdk_ftl.a 00:02:33.453 CC lib/iscsi/conn.o 00:02:33.453 CC lib/iscsi/init_grp.o 00:02:33.453 CC lib/iscsi/iscsi.o 00:02:33.453 CC lib/iscsi/param.o 00:02:33.453 CC lib/iscsi/md5.o 00:02:33.453 CC lib/iscsi/portal_grp.o 00:02:33.453 CC lib/vhost/vhost.o 00:02:33.453 CC lib/iscsi/tgt_node.o 00:02:33.453 CC lib/vhost/vhost_rpc.o 00:02:33.453 CC lib/iscsi/iscsi_subsystem.o 00:02:33.453 CC lib/vhost/vhost_scsi.o 00:02:33.453 CC lib/iscsi/iscsi_rpc.o 00:02:33.453 CC lib/vhost/vhost_blk.o 00:02:33.453 CC lib/iscsi/task.o 00:02:33.453 CC lib/vhost/rte_vhost_user.o 00:02:33.453 SO libspdk_ftl.so.9.0 00:02:34.020 SYMLINK libspdk_ftl.so 00:02:34.588 LIB libspdk_nvmf.a 
00:02:34.588 LIB libspdk_vhost.a 00:02:34.588 SO libspdk_nvmf.so.19.0 00:02:34.588 SO libspdk_vhost.so.8.0 00:02:34.847 SYMLINK libspdk_vhost.so 00:02:34.847 SYMLINK libspdk_nvmf.so 00:02:34.847 LIB libspdk_iscsi.a 00:02:34.847 SO libspdk_iscsi.so.8.0 00:02:35.106 SYMLINK libspdk_iscsi.so 00:02:35.675 CC module/vfu_device/vfu_virtio.o 00:02:35.675 CC module/vfu_device/vfu_virtio_blk.o 00:02:35.675 CC module/vfu_device/vfu_virtio_scsi.o 00:02:35.675 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.675 CC module/vfu_device/vfu_virtio_rpc.o 00:02:35.675 CC module/keyring/file/keyring.o 00:02:35.675 CC module/accel/dsa/accel_dsa.o 00:02:35.675 CC module/keyring/file/keyring_rpc.o 00:02:35.675 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.675 CC module/accel/ioat/accel_ioat.o 00:02:35.675 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.675 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.675 CC module/accel/iaa/accel_iaa.o 00:02:35.675 CC module/sock/posix/posix.o 00:02:35.675 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.675 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.675 CC module/accel/error/accel_error.o 00:02:35.675 CC module/accel/error/accel_error_rpc.o 00:02:35.675 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.675 CC module/blob/bdev/blob_bdev.o 00:02:35.675 CC module/keyring/linux/keyring.o 00:02:35.675 CC module/keyring/linux/keyring_rpc.o 00:02:35.675 LIB libspdk_env_dpdk_rpc.a 00:02:35.675 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.934 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.934 LIB libspdk_keyring_linux.a 00:02:35.934 LIB libspdk_keyring_file.a 00:02:35.934 LIB libspdk_scheduler_gscheduler.a 00:02:35.934 SO libspdk_keyring_linux.so.1.0 00:02:35.934 LIB libspdk_accel_error.a 00:02:35.934 SO libspdk_keyring_file.so.1.0 00:02:35.934 LIB libspdk_accel_ioat.a 00:02:35.934 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.934 LIB libspdk_scheduler_dynamic.a 00:02:35.934 LIB libspdk_accel_iaa.a 00:02:35.934 SO libspdk_accel_error.so.2.0 
00:02:35.934 SO libspdk_accel_ioat.so.6.0 00:02:35.934 SYMLINK libspdk_keyring_linux.so 00:02:35.934 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.934 SO libspdk_accel_iaa.so.3.0 00:02:35.934 SYMLINK libspdk_keyring_file.so 00:02:35.934 LIB libspdk_accel_dsa.a 00:02:36.200 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.200 LIB libspdk_blob_bdev.a 00:02:36.200 SYMLINK libspdk_accel_ioat.so 00:02:36.200 SYMLINK libspdk_accel_error.so 00:02:36.200 SO libspdk_accel_dsa.so.5.0 00:02:36.200 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.200 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.200 SYMLINK libspdk_accel_iaa.so 00:02:36.200 SO libspdk_blob_bdev.so.11.0 00:02:36.200 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.200 SYMLINK libspdk_accel_dsa.so 00:02:36.200 SYMLINK libspdk_blob_bdev.so 00:02:36.200 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.460 LIB libspdk_sock_posix.a 00:02:36.460 SO libspdk_sock_posix.so.6.0 00:02:36.719 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.719 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.719 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.719 CC module/bdev/error/vbdev_error.o 00:02:36.719 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.719 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.719 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.719 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.719 SYMLINK libspdk_sock_posix.so 00:02:36.719 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.719 CC module/bdev/delay/vbdev_delay.o 00:02:36.719 CC module/bdev/aio/bdev_aio.o 00:02:36.719 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.719 CC module/bdev/split/vbdev_split.o 00:02:36.719 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.719 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.719 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.719 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.719 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.719 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.719 CC module/bdev/gpt/gpt.o 00:02:36.719 CC 
module/bdev/malloc/bdev_malloc.o 00:02:36.719 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.719 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.719 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.719 CC module/bdev/nvme/bdev_nvme.o 00:02:36.719 CC module/bdev/null/bdev_null.o 00:02:36.719 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.719 CC module/bdev/raid/bdev_raid.o 00:02:36.719 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.719 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.719 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.719 CC module/bdev/null/bdev_null_rpc.o 00:02:36.719 CC module/bdev/nvme/nvme_rpc.o 00:02:36.719 CC module/bdev/ftl/bdev_ftl.o 00:02:36.719 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.719 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.719 CC module/bdev/raid/raid0.o 00:02:36.719 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.719 CC module/bdev/nvme/vbdev_opal.o 00:02:36.719 CC module/bdev/raid/raid1.o 00:02:36.719 CC module/bdev/raid/concat.o 00:02:36.719 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.977 LIB libspdk_blobfs_bdev.a 00:02:36.977 LIB libspdk_vfu_device.a 00:02:36.977 SO libspdk_blobfs_bdev.so.6.0 00:02:36.977 LIB libspdk_bdev_error.a 00:02:36.977 SO libspdk_vfu_device.so.3.0 00:02:36.977 SO libspdk_bdev_error.so.6.0 00:02:36.977 LIB libspdk_bdev_split.a 00:02:36.977 LIB libspdk_bdev_null.a 00:02:36.977 SYMLINK libspdk_blobfs_bdev.so 00:02:36.977 SO libspdk_bdev_split.so.6.0 00:02:36.977 SYMLINK libspdk_vfu_device.so 00:02:36.977 LIB libspdk_bdev_gpt.a 00:02:36.977 LIB libspdk_bdev_ftl.a 00:02:36.977 SO libspdk_bdev_null.so.6.0 00:02:36.977 LIB libspdk_bdev_passthru.a 00:02:36.977 SYMLINK libspdk_bdev_error.so 00:02:36.977 LIB libspdk_bdev_aio.a 00:02:36.977 SO libspdk_bdev_ftl.so.6.0 00:02:36.977 SO libspdk_bdev_gpt.so.6.0 00:02:36.977 LIB libspdk_bdev_zone_block.a 00:02:36.977 SO libspdk_bdev_passthru.so.6.0 00:02:37.236 LIB libspdk_bdev_delay.a 00:02:37.236 SYMLINK libspdk_bdev_split.so 00:02:37.236 SO 
libspdk_bdev_aio.so.6.0 00:02:37.236 LIB libspdk_bdev_iscsi.a 00:02:37.236 SYMLINK libspdk_bdev_null.so 00:02:37.236 SO libspdk_bdev_delay.so.6.0 00:02:37.236 SO libspdk_bdev_zone_block.so.6.0 00:02:37.236 LIB libspdk_bdev_malloc.a 00:02:37.236 SYMLINK libspdk_bdev_ftl.so 00:02:37.236 SO libspdk_bdev_iscsi.so.6.0 00:02:37.236 SYMLINK libspdk_bdev_gpt.so 00:02:37.236 SO libspdk_bdev_malloc.so.6.0 00:02:37.236 SYMLINK libspdk_bdev_aio.so 00:02:37.236 SYMLINK libspdk_bdev_zone_block.so 00:02:37.236 SYMLINK libspdk_bdev_passthru.so 00:02:37.236 SYMLINK libspdk_bdev_iscsi.so 00:02:37.236 SYMLINK libspdk_bdev_delay.so 00:02:37.236 SYMLINK libspdk_bdev_malloc.so 00:02:37.236 LIB libspdk_bdev_virtio.a 00:02:37.236 LIB libspdk_bdev_lvol.a 00:02:37.236 SO libspdk_bdev_virtio.so.6.0 00:02:37.236 SO libspdk_bdev_lvol.so.6.0 00:02:37.495 SYMLINK libspdk_bdev_virtio.so 00:02:37.495 SYMLINK libspdk_bdev_lvol.so 00:02:37.754 LIB libspdk_bdev_raid.a 00:02:37.754 SO libspdk_bdev_raid.so.6.0 00:02:38.012 SYMLINK libspdk_bdev_raid.so 00:02:38.949 LIB libspdk_bdev_nvme.a 00:02:38.949 SO libspdk_bdev_nvme.so.7.0 00:02:39.208 SYMLINK libspdk_bdev_nvme.so 00:02:39.776 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.776 CC module/event/subsystems/vmd/vmd.o 00:02:39.776 CC module/event/subsystems/sock/sock.o 00:02:39.776 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.776 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.776 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.776 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:39.776 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.776 CC module/event/subsystems/keyring/keyring.o 00:02:40.035 LIB libspdk_event_keyring.a 00:02:40.035 LIB libspdk_event_vhost_blk.a 00:02:40.035 LIB libspdk_event_scheduler.a 00:02:40.035 LIB libspdk_event_sock.a 00:02:40.035 SO libspdk_event_keyring.so.1.0 00:02:40.035 LIB libspdk_event_vfu_tgt.a 00:02:40.035 LIB libspdk_event_vmd.a 00:02:40.035 LIB libspdk_event_iobuf.a 00:02:40.035 
SO libspdk_event_scheduler.so.4.0 00:02:40.035 SO libspdk_event_vhost_blk.so.3.0 00:02:40.035 SO libspdk_event_sock.so.5.0 00:02:40.035 SO libspdk_event_vfu_tgt.so.3.0 00:02:40.035 SO libspdk_event_vmd.so.6.0 00:02:40.035 SO libspdk_event_iobuf.so.3.0 00:02:40.035 SYMLINK libspdk_event_keyring.so 00:02:40.035 SYMLINK libspdk_event_scheduler.so 00:02:40.035 SYMLINK libspdk_event_vhost_blk.so 00:02:40.035 SYMLINK libspdk_event_vfu_tgt.so 00:02:40.035 SYMLINK libspdk_event_sock.so 00:02:40.035 SYMLINK libspdk_event_vmd.so 00:02:40.035 SYMLINK libspdk_event_iobuf.so 00:02:40.603 CC module/event/subsystems/accel/accel.o 00:02:40.603 LIB libspdk_event_accel.a 00:02:40.603 SO libspdk_event_accel.so.6.0 00:02:40.603 SYMLINK libspdk_event_accel.so 00:02:41.171 CC module/event/subsystems/bdev/bdev.o 00:02:41.171 LIB libspdk_event_bdev.a 00:02:41.171 SO libspdk_event_bdev.so.6.0 00:02:41.430 SYMLINK libspdk_event_bdev.so 00:02:41.689 CC module/event/subsystems/ublk/ublk.o 00:02:41.689 CC module/event/subsystems/scsi/scsi.o 00:02:41.689 CC module/event/subsystems/nbd/nbd.o 00:02:41.689 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.689 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.948 LIB libspdk_event_nbd.a 00:02:41.948 LIB libspdk_event_scsi.a 00:02:41.948 LIB libspdk_event_ublk.a 00:02:41.948 SO libspdk_event_nbd.so.6.0 00:02:41.948 SO libspdk_event_scsi.so.6.0 00:02:41.948 SO libspdk_event_ublk.so.3.0 00:02:41.948 LIB libspdk_event_nvmf.a 00:02:41.948 SYMLINK libspdk_event_nbd.so 00:02:41.948 SYMLINK libspdk_event_scsi.so 00:02:41.948 SYMLINK libspdk_event_ublk.so 00:02:41.948 SO libspdk_event_nvmf.so.6.0 00:02:41.948 SYMLINK libspdk_event_nvmf.so 00:02:42.209 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.209 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.512 LIB libspdk_event_vhost_scsi.a 00:02:42.512 LIB libspdk_event_iscsi.a 00:02:42.512 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.512 SO libspdk_event_iscsi.so.6.0 00:02:42.512 SYMLINK 
libspdk_event_vhost_scsi.so 00:02:42.512 SYMLINK libspdk_event_iscsi.so 00:02:42.772 SO libspdk.so.6.0 00:02:42.772 SYMLINK libspdk.so 00:02:43.032 CXX app/trace/trace.o 00:02:43.032 TEST_HEADER include/spdk/accel.h 00:02:43.032 TEST_HEADER include/spdk/accel_module.h 00:02:43.033 CC test/rpc_client/rpc_client_test.o 00:02:43.033 CC app/spdk_lspci/spdk_lspci.o 00:02:43.033 CC app/spdk_top/spdk_top.o 00:02:43.033 CC app/trace_record/trace_record.o 00:02:43.033 TEST_HEADER include/spdk/barrier.h 00:02:43.033 TEST_HEADER include/spdk/bdev.h 00:02:43.033 TEST_HEADER include/spdk/assert.h 00:02:43.033 TEST_HEADER include/spdk/base64.h 00:02:43.033 TEST_HEADER include/spdk/bdev_module.h 00:02:43.033 CC app/spdk_nvme_perf/perf.o 00:02:43.033 TEST_HEADER include/spdk/bit_array.h 00:02:43.033 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.033 CC app/spdk_nvme_identify/identify.o 00:02:43.033 TEST_HEADER include/spdk/bit_pool.h 00:02:43.033 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.033 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.033 TEST_HEADER include/spdk/blob.h 00:02:43.033 TEST_HEADER include/spdk/blobfs.h 00:02:43.033 TEST_HEADER include/spdk/conf.h 00:02:43.033 TEST_HEADER include/spdk/config.h 00:02:43.033 TEST_HEADER include/spdk/cpuset.h 00:02:43.033 TEST_HEADER include/spdk/crc16.h 00:02:43.033 TEST_HEADER include/spdk/crc32.h 00:02:43.033 TEST_HEADER include/spdk/dif.h 00:02:43.033 TEST_HEADER include/spdk/crc64.h 00:02:43.033 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.033 TEST_HEADER include/spdk/dma.h 00:02:43.033 TEST_HEADER include/spdk/endian.h 00:02:43.033 TEST_HEADER include/spdk/env.h 00:02:43.033 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.033 TEST_HEADER include/spdk/fd_group.h 00:02:43.033 TEST_HEADER include/spdk/fd.h 00:02:43.033 TEST_HEADER include/spdk/event.h 00:02:43.033 TEST_HEADER include/spdk/file.h 00:02:43.033 TEST_HEADER include/spdk/ftl.h 00:02:43.033 TEST_HEADER include/spdk/gpt_spec.h 00:02:43.033 TEST_HEADER 
include/spdk/hexlify.h 00:02:43.033 TEST_HEADER include/spdk/idxd.h 00:02:43.033 TEST_HEADER include/spdk/histogram_data.h 00:02:43.033 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.033 TEST_HEADER include/spdk/ioat.h 00:02:43.033 TEST_HEADER include/spdk/init.h 00:02:43.033 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.033 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.033 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.033 TEST_HEADER include/spdk/json.h 00:02:43.033 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.033 TEST_HEADER include/spdk/keyring.h 00:02:43.033 TEST_HEADER include/spdk/keyring_module.h 00:02:43.033 TEST_HEADER include/spdk/likely.h 00:02:43.033 TEST_HEADER include/spdk/log.h 00:02:43.033 TEST_HEADER include/spdk/lvol.h 00:02:43.033 TEST_HEADER include/spdk/memory.h 00:02:43.033 TEST_HEADER include/spdk/mmio.h 00:02:43.033 TEST_HEADER include/spdk/nbd.h 00:02:43.033 TEST_HEADER include/spdk/notify.h 00:02:43.033 TEST_HEADER include/spdk/net.h 00:02:43.033 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.033 TEST_HEADER include/spdk/nvme.h 00:02:43.033 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.033 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.033 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.033 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.033 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.033 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.033 TEST_HEADER include/spdk/nvmf.h 00:02:43.033 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.033 CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.033 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.033 TEST_HEADER include/spdk/opal.h 00:02:43.033 TEST_HEADER include/spdk/opal_spec.h 00:02:43.033 TEST_HEADER include/spdk/pci_ids.h 00:02:43.033 TEST_HEADER include/spdk/pipe.h 00:02:43.033 TEST_HEADER include/spdk/reduce.h 00:02:43.033 TEST_HEADER include/spdk/queue.h 00:02:43.033 TEST_HEADER include/spdk/rpc.h 00:02:43.033 CC app/nvmf_tgt/nvmf_main.o 00:02:43.033 TEST_HEADER include/spdk/scheduler.h 
00:02:43.033 TEST_HEADER include/spdk/scsi.h 00:02:43.033 CC app/spdk_dd/spdk_dd.o 00:02:43.033 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.033 TEST_HEADER include/spdk/stdinc.h 00:02:43.033 TEST_HEADER include/spdk/sock.h 00:02:43.033 TEST_HEADER include/spdk/thread.h 00:02:43.033 TEST_HEADER include/spdk/string.h 00:02:43.033 TEST_HEADER include/spdk/trace.h 00:02:43.033 TEST_HEADER include/spdk/trace_parser.h 00:02:43.033 TEST_HEADER include/spdk/tree.h 00:02:43.033 TEST_HEADER include/spdk/ublk.h 00:02:43.033 TEST_HEADER include/spdk/util.h 00:02:43.033 TEST_HEADER include/spdk/uuid.h 00:02:43.033 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.033 TEST_HEADER include/spdk/version.h 00:02:43.033 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.033 TEST_HEADER include/spdk/vhost.h 00:02:43.033 TEST_HEADER include/spdk/vmd.h 00:02:43.033 TEST_HEADER include/spdk/xor.h 00:02:43.033 TEST_HEADER include/spdk/zipf.h 00:02:43.303 CXX test/cpp_headers/accel.o 00:02:43.303 CXX test/cpp_headers/accel_module.o 00:02:43.303 CXX test/cpp_headers/assert.o 00:02:43.303 CXX test/cpp_headers/base64.o 00:02:43.303 CXX test/cpp_headers/barrier.o 00:02:43.303 CXX test/cpp_headers/bdev.o 00:02:43.303 CXX test/cpp_headers/bdev_module.o 00:02:43.303 CXX test/cpp_headers/bdev_zone.o 00:02:43.303 CXX test/cpp_headers/bit_array.o 00:02:43.303 CXX test/cpp_headers/bit_pool.o 00:02:43.303 CXX test/cpp_headers/blob_bdev.o 00:02:43.303 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.303 CXX test/cpp_headers/blobfs.o 00:02:43.303 CXX test/cpp_headers/conf.o 00:02:43.303 CXX test/cpp_headers/blob.o 00:02:43.303 CXX test/cpp_headers/config.o 00:02:43.303 CXX test/cpp_headers/crc16.o 00:02:43.303 CXX test/cpp_headers/cpuset.o 00:02:43.303 CXX test/cpp_headers/crc32.o 00:02:43.303 CC app/spdk_tgt/spdk_tgt.o 00:02:43.303 CXX test/cpp_headers/crc64.o 00:02:43.303 CXX test/cpp_headers/dma.o 00:02:43.303 CXX test/cpp_headers/dif.o 00:02:43.303 CXX test/cpp_headers/endian.o 00:02:43.303 CXX 
test/cpp_headers/env_dpdk.o 00:02:43.303 CXX test/cpp_headers/event.o 00:02:43.303 CXX test/cpp_headers/env.o 00:02:43.303 CXX test/cpp_headers/fd_group.o 00:02:43.303 CXX test/cpp_headers/file.o 00:02:43.303 CXX test/cpp_headers/fd.o 00:02:43.303 CXX test/cpp_headers/ftl.o 00:02:43.303 CXX test/cpp_headers/gpt_spec.o 00:02:43.303 CXX test/cpp_headers/histogram_data.o 00:02:43.303 CXX test/cpp_headers/hexlify.o 00:02:43.303 CXX test/cpp_headers/idxd.o 00:02:43.303 CXX test/cpp_headers/idxd_spec.o 00:02:43.303 CXX test/cpp_headers/init.o 00:02:43.303 CXX test/cpp_headers/ioat_spec.o 00:02:43.303 CXX test/cpp_headers/ioat.o 00:02:43.303 CXX test/cpp_headers/iscsi_spec.o 00:02:43.303 CXX test/cpp_headers/json.o 00:02:43.303 CXX test/cpp_headers/jsonrpc.o 00:02:43.303 CXX test/cpp_headers/keyring_module.o 00:02:43.303 CXX test/cpp_headers/keyring.o 00:02:43.303 CXX test/cpp_headers/likely.o 00:02:43.303 CXX test/cpp_headers/log.o 00:02:43.303 CXX test/cpp_headers/lvol.o 00:02:43.303 CXX test/cpp_headers/mmio.o 00:02:43.303 CXX test/cpp_headers/memory.o 00:02:43.303 CXX test/cpp_headers/notify.o 00:02:43.303 CXX test/cpp_headers/net.o 00:02:43.303 CXX test/cpp_headers/nvme_intel.o 00:02:43.303 CXX test/cpp_headers/nbd.o 00:02:43.303 CXX test/cpp_headers/nvme.o 00:02:43.303 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.303 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.303 CXX test/cpp_headers/nvme_spec.o 00:02:43.303 CXX test/cpp_headers/nvme_zns.o 00:02:43.303 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.303 CXX test/cpp_headers/nvmf_cmd.o 00:02:43.303 CXX test/cpp_headers/nvmf.o 00:02:43.303 CXX test/cpp_headers/nvmf_spec.o 00:02:43.303 CXX test/cpp_headers/opal.o 00:02:43.303 CXX test/cpp_headers/nvmf_transport.o 00:02:43.303 CXX test/cpp_headers/opal_spec.o 00:02:43.303 CXX test/cpp_headers/pci_ids.o 00:02:43.303 CXX test/cpp_headers/pipe.o 00:02:43.303 CXX test/cpp_headers/queue.o 00:02:43.303 CXX test/cpp_headers/reduce.o 00:02:43.303 CXX test/cpp_headers/rpc.o 
00:02:43.303 CXX test/cpp_headers/scheduler.o 00:02:43.303 CXX test/cpp_headers/scsi.o 00:02:43.303 CXX test/cpp_headers/scsi_spec.o 00:02:43.303 CXX test/cpp_headers/sock.o 00:02:43.303 CXX test/cpp_headers/stdinc.o 00:02:43.303 CXX test/cpp_headers/string.o 00:02:43.303 CXX test/cpp_headers/thread.o 00:02:43.303 CC examples/util/zipf/zipf.o 00:02:43.303 CC test/app/jsoncat/jsoncat.o 00:02:43.303 CXX test/cpp_headers/trace.o 00:02:43.303 CXX test/cpp_headers/trace_parser.o 00:02:43.303 CXX test/cpp_headers/tree.o 00:02:43.303 CXX test/cpp_headers/ublk.o 00:02:43.303 CXX test/cpp_headers/util.o 00:02:43.303 CXX test/cpp_headers/uuid.o 00:02:43.303 CC examples/ioat/verify/verify.o 00:02:43.303 CXX test/cpp_headers/version.o 00:02:43.303 CC examples/ioat/perf/perf.o 00:02:43.303 CC test/app/histogram_perf/histogram_perf.o 00:02:43.303 CC test/app/stub/stub.o 00:02:43.303 CC test/env/vtophys/vtophys.o 00:02:43.303 CC test/env/pci/pci_ut.o 00:02:43.304 CC test/env/memory/memory_ut.o 00:02:43.304 CC app/fio/nvme/fio_plugin.o 00:02:43.304 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.304 CC test/thread/poller_perf/poller_perf.o 00:02:43.304 CC test/app/bdev_svc/bdev_svc.o 00:02:43.304 CC test/dma/test_dma/test_dma.o 00:02:43.576 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.576 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.576 LINK spdk_lspci 00:02:43.576 CC app/fio/bdev/fio_plugin.o 00:02:43.844 LINK rpc_client_test 00:02:43.844 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.844 CC test/env/mem_callbacks/mem_callbacks.o 00:02:44.102 LINK spdk_trace_record 00:02:44.103 LINK jsoncat 00:02:44.103 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.103 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.103 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.103 CXX test/cpp_headers/vhost.o 00:02:44.103 CXX test/cpp_headers/vmd.o 00:02:44.103 CXX test/cpp_headers/xor.o 00:02:44.103 CXX test/cpp_headers/zipf.o 00:02:44.103 LINK nvmf_tgt 00:02:44.103 LINK 
iscsi_tgt 00:02:44.103 LINK interrupt_tgt 00:02:44.103 LINK spdk_nvme_discover 00:02:44.103 LINK zipf 00:02:44.103 LINK histogram_perf 00:02:44.103 LINK poller_perf 00:02:44.103 LINK vtophys 00:02:44.103 LINK stub 00:02:44.103 LINK bdev_svc 00:02:44.103 LINK spdk_dd 00:02:44.363 LINK spdk_tgt 00:02:44.363 LINK env_dpdk_post_init 00:02:44.363 LINK spdk_trace 00:02:44.363 LINK ioat_perf 00:02:44.363 LINK verify 00:02:44.363 LINK pci_ut 00:02:44.363 LINK test_dma 00:02:44.621 LINK nvme_fuzz 00:02:44.621 LINK spdk_bdev 00:02:44.622 CC examples/sock/hello_world/hello_sock.o 00:02:44.622 LINK vhost_fuzz 00:02:44.622 LINK spdk_nvme 00:02:44.622 CC examples/idxd/perf/perf.o 00:02:44.622 CC examples/vmd/led/led.o 00:02:44.622 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.622 CC test/event/reactor/reactor.o 00:02:44.622 CC app/vhost/vhost.o 00:02:44.622 CC test/event/reactor_perf/reactor_perf.o 00:02:44.622 CC test/event/event_perf/event_perf.o 00:02:44.622 CC examples/thread/thread/thread_ex.o 00:02:44.622 LINK spdk_nvme_identify 00:02:44.622 CC test/event/app_repeat/app_repeat.o 00:02:44.622 CC test/event/scheduler/scheduler.o 00:02:44.622 LINK spdk_top 00:02:44.622 LINK mem_callbacks 00:02:44.880 LINK lsvmd 00:02:44.880 LINK led 00:02:44.880 LINK reactor_perf 00:02:44.880 LINK reactor 00:02:44.880 LINK event_perf 00:02:44.880 LINK hello_sock 00:02:44.880 LINK app_repeat 00:02:44.880 LINK vhost 00:02:44.880 LINK thread 00:02:45.139 LINK scheduler 00:02:45.139 LINK idxd_perf 00:02:45.139 CC test/nvme/connect_stress/connect_stress.o 00:02:45.139 CC test/nvme/compliance/nvme_compliance.o 00:02:45.139 CC test/nvme/reserve/reserve.o 00:02:45.139 CC test/nvme/aer/aer.o 00:02:45.139 CC test/nvme/simple_copy/simple_copy.o 00:02:45.139 CC test/nvme/err_injection/err_injection.o 00:02:45.139 CC test/nvme/e2edp/nvme_dp.o 00:02:45.139 CC test/nvme/fdp/fdp.o 00:02:45.139 CC test/nvme/boot_partition/boot_partition.o 00:02:45.139 CC test/nvme/sgl/sgl.o 00:02:45.139 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:45.139 CC test/nvme/reset/reset.o 00:02:45.139 CC test/nvme/startup/startup.o 00:02:45.139 CC test/nvme/overhead/overhead.o 00:02:45.139 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:45.139 CC test/nvme/cuse/cuse.o 00:02:45.139 CC test/accel/dif/dif.o 00:02:45.139 CC test/blobfs/mkfs/mkfs.o 00:02:45.139 CC test/lvol/esnap/esnap.o 00:02:45.139 LINK memory_ut 00:02:45.139 LINK connect_stress 00:02:45.397 LINK boot_partition 00:02:45.397 LINK err_injection 00:02:45.397 LINK startup 00:02:45.397 LINK reserve 00:02:45.397 LINK fused_ordering 00:02:45.397 LINK simple_copy 00:02:45.397 LINK aer 00:02:45.397 LINK nvme_dp 00:02:45.398 LINK reset 00:02:45.398 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:45.398 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.398 CC examples/nvme/arbitration/arbitration.o 00:02:45.398 CC examples/nvme/abort/abort.o 00:02:45.398 CC examples/nvme/hotplug/hotplug.o 00:02:45.398 LINK overhead 00:02:45.398 CC examples/nvme/hello_world/hello_world.o 00:02:45.398 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:45.398 LINK mkfs 00:02:45.398 CC examples/nvme/reconnect/reconnect.o 00:02:45.398 LINK spdk_nvme_perf 00:02:45.398 LINK nvme_compliance 00:02:45.398 LINK fdp 00:02:45.398 CC examples/accel/perf/accel_perf.o 00:02:45.398 LINK doorbell_aers 00:02:45.656 CC examples/blob/cli/blobcli.o 00:02:45.656 CC examples/blob/hello_world/hello_blob.o 00:02:45.656 LINK dif 00:02:45.656 LINK sgl 00:02:45.656 LINK pmr_persistence 00:02:45.656 LINK cmb_copy 00:02:45.656 LINK hotplug 00:02:45.656 LINK hello_world 00:02:45.656 LINK arbitration 00:02:45.914 LINK abort 00:02:45.914 LINK reconnect 00:02:45.914 LINK iscsi_fuzz 00:02:45.914 LINK hello_blob 00:02:45.914 LINK nvme_manage 00:02:45.914 LINK accel_perf 00:02:46.173 LINK blobcli 00:02:46.173 LINK cuse 00:02:46.173 CC test/bdev/bdevio/bdevio.o 00:02:46.430 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.430 CC examples/bdev/bdevperf/bdevperf.o 
00:02:46.430 LINK bdevio 00:02:46.689 LINK hello_bdev 00:02:47.257 LINK bdevperf 00:02:47.825 CC examples/nvmf/nvmf/nvmf.o 00:02:48.394 LINK nvmf 00:02:50.298 LINK esnap 00:02:50.557 00:02:50.557 real 0m54.935s 00:02:50.557 user 8m28.750s 00:02:50.557 sys 4m17.780s 00:02:50.557 11:49:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:50.557 11:49:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.557 ************************************ 00:02:50.557 END TEST make 00:02:50.557 ************************************ 00:02:50.557 11:49:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.557 11:49:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.557 11:49:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.557 11:49:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 11:49:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.557 11:49:27 -- pm/common@44 -- $ pid=3816665 00:02:50.557 11:49:27 -- pm/common@50 -- $ kill -TERM 3816665 00:02:50.557 11:49:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 11:49:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.557 11:49:27 -- pm/common@44 -- $ pid=3816667 00:02:50.557 11:49:27 -- pm/common@50 -- $ kill -TERM 3816667 00:02:50.557 11:49:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 11:49:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:50.557 11:49:27 -- pm/common@44 -- $ pid=3816669 00:02:50.557 11:49:27 -- pm/common@50 -- $ kill -TERM 3816669 00:02:50.557 11:49:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 11:49:27 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:50.557 11:49:27 -- pm/common@44 -- $ pid=3816691 00:02:50.557 11:49:27 -- pm/common@50 -- $ sudo -E kill -TERM 3816691 00:02:50.817 11:49:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.817 11:49:27 -- nvmf/common.sh@7 -- # uname -s 00:02:50.817 11:49:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.817 11:49:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.817 11:49:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.817 11:49:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.817 11:49:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.817 11:49:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.817 11:49:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.818 11:49:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.818 11:49:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.818 11:49:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.818 11:49:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:50.818 11:49:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:50.818 11:49:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.818 11:49:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.818 11:49:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:50.818 11:49:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.818 11:49:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:50.818 11:49:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.818 11:49:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.818 11:49:27 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.818 11:49:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.818 11:49:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.818 11:49:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.818 11:49:27 -- paths/export.sh@5 -- # export PATH 00:02:50.818 11:49:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.818 11:49:27 -- nvmf/common.sh@47 -- # : 0 00:02:50.818 11:49:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:50.818 11:49:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:50.818 11:49:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.818 11:49:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.818 11:49:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.818 11:49:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:50.818 11:49:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:50.818 11:49:27 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:02:50.818 11:49:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.818 11:49:27 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.818 11:49:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.818 11:49:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.818 11:49:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.818 11:49:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.818 11:49:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.818 11:49:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.818 11:49:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.818 11:49:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.818 11:49:28 -- spdk/autotest.sh@48 -- # udevadm_pid=3879369 00:02:50.818 11:49:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.818 11:49:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.818 11:49:28 -- pm/common@17 -- # local monitor 00:02:50.818 11:49:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.818 11:49:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.818 11:49:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.818 11:49:28 -- pm/common@21 -- # date +%s 00:02:50.818 11:49:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.818 11:49:28 -- pm/common@21 -- # date +%s 00:02:50.818 11:49:28 -- pm/common@25 -- # sleep 1 00:02:50.818 11:49:28 -- pm/common@21 -- # date +%s 00:02:50.818 11:49:28 -- pm/common@21 -- # date +%s 00:02:50.818 11:49:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900968 00:02:50.818 11:49:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900968 00:02:50.818 11:49:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900968 00:02:50.818 11:49:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900968 00:02:50.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900968_collect-vmstat.pm.log 00:02:50.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900968_collect-cpu-temp.pm.log 00:02:50.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900968_collect-cpu-load.pm.log 00:02:50.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900968_collect-bmc-pm.bmc.pm.log 00:02:51.756 11:49:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.756 11:49:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.757 11:49:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:51.757 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:02:51.757 11:49:29 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.757 11:49:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:51.757 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:02:52.016 11:49:29 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:52.016 11:49:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.016 11:49:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.016 11:49:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:52.016 11:49:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.016 11:49:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:52.016 11:49:29 -- common/autotest_common.sh@1455 -- # uname 00:02:52.016 11:49:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:52.016 11:49:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:52.016 11:49:29 -- common/autotest_common.sh@1475 -- # uname 00:02:52.016 11:49:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:52.016 11:49:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:52.016 11:49:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:52.016 11:49:29 -- spdk/autotest.sh@72 -- # hash lcov 00:02:52.016 11:49:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:52.016 11:49:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:52.016 --rc lcov_branch_coverage=1 00:02:52.016 --rc lcov_function_coverage=1 00:02:52.016 --rc genhtml_branch_coverage=1 00:02:52.016 --rc genhtml_function_coverage=1 00:02:52.016 --rc genhtml_legend=1 00:02:52.017 --rc geninfo_all_blocks=1 00:02:52.017 ' 00:02:52.017 11:49:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:52.017 --rc lcov_branch_coverage=1 00:02:52.017 --rc lcov_function_coverage=1 00:02:52.017 --rc genhtml_branch_coverage=1 00:02:52.017 --rc genhtml_function_coverage=1 00:02:52.017 --rc genhtml_legend=1 00:02:52.017 --rc geninfo_all_blocks=1 00:02:52.017 ' 00:02:52.017 11:49:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:52.017 --rc 
lcov_branch_coverage=1 00:02:52.017 --rc lcov_function_coverage=1 00:02:52.017 --rc genhtml_branch_coverage=1 00:02:52.017 --rc genhtml_function_coverage=1 00:02:52.017 --rc genhtml_legend=1 00:02:52.017 --rc geninfo_all_blocks=1 00:02:52.017 --no-external' 00:02:52.017 11:49:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:52.017 --rc lcov_branch_coverage=1 00:02:52.017 --rc lcov_function_coverage=1 00:02:52.017 --rc genhtml_branch_coverage=1 00:02:52.017 --rc genhtml_function_coverage=1 00:02:52.017 --rc genhtml_legend=1 00:02:52.017 --rc geninfo_all_blocks=1 00:02:52.017 --no-external' 00:02:52.017 11:49:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:52.017 lcov: LCOV version 1.14 00:02:52.017 11:49:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:53.923 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:53.923 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:53.923 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:53.923 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:53.923 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:53.923 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no 
functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:53.924 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:53.924 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:53.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:53.924 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:54.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:54.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:03:09.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:09.065 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:27.155 11:50:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:27.155 11:50:03 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:27.155 11:50:03 -- common/autotest_common.sh@10 -- # set +x
00:03:27.155 11:50:03 -- spdk/autotest.sh@91 -- # rm -f
00:03:27.155 11:50:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:29.731 0000:86:00.0 (8086 0a54): Already using the nvme driver
00:03:29.731 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:29.731 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:29.731 11:50:06 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:29.731 11:50:06 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:29.731 11:50:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:29.731 11:50:06 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:29.731 11:50:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:29.731 11:50:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:29.731 11:50:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:29.731 11:50:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:29.731 11:50:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:29.731 11:50:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:29.731 11:50:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:29.731 11:50:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:29.731 11:50:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:29.731 11:50:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:29.731 11:50:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:29.731 No valid GPT data, bailing
00:03:29.731 11:50:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:29.731 11:50:06 -- scripts/common.sh@391 -- # pt=
00:03:29.731 11:50:06 -- scripts/common.sh@392 -- # return 1
00:03:29.731 11:50:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:29.731 1+0 records in
00:03:29.731 1+0 records out
00:03:29.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398313 s, 263 MB/s
00:03:29.731 11:50:06 -- spdk/autotest.sh@118 -- # sync
00:03:29.731 11:50:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:29.731 11:50:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:29.731 11:50:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:36.304 11:50:12 -- spdk/autotest.sh@124 -- # uname -s
00:03:36.304 11:50:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:36.304 11:50:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:36.304 11:50:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:36.304 11:50:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:36.304 11:50:12 -- common/autotest_common.sh@10 -- # set +x
00:03:36.304 ************************************
00:03:36.304 START TEST setup.sh
00:03:36.304 ************************************
00:03:36.304 11:50:12 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:36.304 * Looking for test storage...
00:03:36.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:36.304 11:50:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:36.304 11:50:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:36.304 11:50:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:36.304 11:50:12 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:36.304 11:50:12 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:36.304 11:50:12 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:36.304 ************************************
00:03:36.304 START TEST acl
00:03:36.304 ************************************
00:03:36.304 11:50:12 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:36.304 * Looking for test storage...
00:03:36.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:36.304 11:50:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:36.304 11:50:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:36.304 11:50:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:36.304 11:50:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:36.304 11:50:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:36.304 11:50:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:36.304 11:50:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:36.304 11:50:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:36.304 11:50:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:38.842 11:50:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:38.842 11:50:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:38.842 11:50:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:38.842 11:50:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:38.842 11:50:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:38.842 11:50:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:42.135 Hugepages
00:03:42.135 node hugesize free / total
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 00
00:03:42.135 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]]
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:42.135 11:50:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:42.135 11:50:18 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:42.135 11:50:18 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:42.135 11:50:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:42.135 ************************************
00:03:42.135 START TEST denied
00:03:42.135 ************************************
00:03:42.135 11:50:18 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:03:42.135 11:50:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0'
00:03:42.135 11:50:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:42.135 11:50:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0'
00:03:42.135 11:50:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.135 11:50:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:45.424 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]]
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:45.424 11:50:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:49.660
00:03:49.660 real 0m7.292s
00:03:49.660 user 0m2.319s
00:03:49.660 sys 0m4.246s
00:03:49.660 11:50:26 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:49.660 11:50:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:49.660 ************************************
00:03:49.660 END TEST denied
00:03:49.660 ************************************
00:03:49.660 11:50:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:49.660 11:50:26 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:49.660 11:50:26 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:49.660 11:50:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:49.660 ************************************
00:03:49.660 START TEST allowed
00:03:49.660 ************************************
00:03:49.660 11:50:26 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:03:49.660 11:50:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0
00:03:49.660 11:50:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:49.660 11:50:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.660 11:50:26 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:49.660 11:50:26 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*'
00:03:52.960 0000:86:00.0 (8086 0a54): nvme -> vfio-pci
00:03:52.960 11:50:30 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:52.960 11:50:30 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:52.960 11:50:30 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:52.960 11:50:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:52.960 11:50:30 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:56.262
00:03:56.262 real 0m7.170s
00:03:56.262 user 0m2.171s
00:03:56.262 sys 0m4.083s
00:03:56.262 11:50:33 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:56.262 11:50:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:56.263 ************************************
00:03:56.263 END TEST allowed
00:03:56.263 ************************************
00:03:56.263
00:03:56.263 real 0m20.513s
00:03:56.263 user 0m6.614s
00:03:56.263 sys 0m12.452s
00:03:56.263 11:50:33 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:56.263 11:50:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:56.263 ************************************
00:03:56.263 END TEST acl
00:03:56.263 ************************************
00:03:56.263 11:50:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:56.263 11:50:33 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:56.263 11:50:33 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:56.263 11:50:33 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:56.529 ************************************
00:03:56.529 START TEST hugepages
00:03:56.529 ************************************
00:03:56.529 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:56.529 * Looking for test storage...
00:03:56.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69060260 kB' 'MemAvailable: 72930488 kB' 'Buffers: 2724 kB' 'Cached: 14763892 kB' 'SwapCached: 0 kB' 'Active: 11654816 kB' 'Inactive: 3702288 kB' 'Active(anon): 11201088 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593924 kB' 'Mapped: 217300 kB' 'Shmem: 10610600 kB' 'KReclaimable: 550172 kB' 'Slab: 1239388 kB' 'SReclaimable: 550172 kB' 'SUnreclaim: 689216 kB' 'KernelStack: 22688 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 12701560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220524 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:56.529 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:56.529 11:50:33 setup.sh.hugepages
-- setup/common.sh@31 -- # read -r var val _ [identical check-and-continue xtrace repeated for each remaining /proc/meminfo field, Buffers through Unaccepted] 00:03:56.530 11:50:33
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:56.530 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.531 
11:50:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:56.531 11:50:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:56.531 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.531 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.531 11:50:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.531 ************************************ 00:03:56.531 START TEST default_setup 00:03:56.531 ************************************ 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.531 11:50:33 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.531 11:50:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.823 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:03:59.823 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:59.823 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.393 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71210568 kB' 'MemAvailable: 75080764 kB' 'Buffers: 2724 kB' 'Cached: 14763996 kB' 'SwapCached: 0 kB' 'Active: 11674432 kB' 'Inactive: 3702288 kB' 'Active(anon): 11220704 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613348 kB' 'Mapped: 217236 kB' 'Shmem: 10610704 kB' 'KReclaimable: 550140 kB' 'Slab: 1238112 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687972 kB' 'KernelStack: 23024 kB' 'PageTables: 10048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220700 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 
'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
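Before the default_setup verification above, hugepages.sh's clear_hp zeroed nr_hugepages for every page size on every NUMA node (the paired `echo 0` lines in the earlier xtrace). A hedged sketch of that sweep, pointed at a scratch directory standing in for /sys/devices/system/node so it is safe to run outside the CI host:

```shell
#!/usr/bin/env bash
# Sketch of clear_hp from setup/hugepages.sh: write 0 to each per-node
# nr_hugepages file. A mktemp tree stands in for the real sysfs layout.
sysfs=$(mktemp -d)
for node in 0 1; do
    d="$sysfs/node$node/hugepages/hugepages-2048kB"
    mkdir -p "$d"
    echo 1024 > "$d/nr_hugepages"    # pretend pages were reserved earlier
done

clear_hp() {
    local hp
    for hp in "$sysfs"/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"               # on real sysfs this releases the pages
    done
}

clear_hp
cat "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages"   # prints: 0
```

On a real host the glob would also match hugepages-1048576kB directories where 1G pages are configured, which is why the script loops over `hugepages-*` rather than hard-coding the 2 MB size.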
00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.393 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.393 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 
11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.658 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.659 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71211328 kB' 'MemAvailable: 75081524 kB' 'Buffers: 2724 kB' 'Cached: 14764000 kB' 'SwapCached: 0 kB' 'Active: 11673872 kB' 'Inactive: 3702288 kB' 'Active(anon): 11220144 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612784 kB' 'Mapped: 217196 kB' 'Shmem: 10610708 kB' 'KReclaimable: 550140 kB' 'Slab: 1238104 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687964 kB' 'KernelStack: 22960 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220652 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.659 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.659 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace repeats the identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" cycle for each remaining /proc/meminfo field from MemAvailable through HugePages_Total, timestamps 00:04:00.660-00:04:00.661; the chunk then ends mid-check]
00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.661 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71210240 kB' 'MemAvailable: 75080436 kB' 'Buffers: 2724 kB' 'Cached: 14764016 kB' 'SwapCached: 0 kB' 'Active: 11678792 kB' 'Inactive: 3702288 kB' 'Active(anon): 11225064 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618096 kB' 'Mapped: 217700 kB' 'Shmem: 10610724 kB' 'KReclaimable: 550140 kB' 'Slab: 1238160 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688020 kB' 'KernelStack: 22992 kB' 'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12728108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220624 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:00.662 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.662 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
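The xtrace replayed above is setup/common.sh's `get_meminfo` helper: it reads /proc/meminfo with `IFS=': '`, compares each key against the requested one (the `\H\u\g\e\P\a\g\e\s\_...` pattern is just bash's xtrace escaping of that literal comparison), and echoes the matching value. A minimal stand-alone sketch of that loop, reconstructed from the trace rather than copied from the SPDK source:

```shell
# Sketch of the get_meminfo loop seen in the xtrace (reconstructed from the
# trace; the real helper also supports per-NUMA-node lookups via
# /sys/devices/system/node/nodeN/meminfo, elided here).
get_meminfo() {
    local get=$1 var val _
    # Splits "HugePages_Surp:    0" into var=HugePages_Surp, val=0
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0   # fallback for an absent key (an assumption, not shown in the trace)
}

get_meminfo HugePages_Surp   # 0 in this run's log
```

The `echo 0 / return 0` pair at common.sh@33 in the trace is this match branch firing on `HugePages_Surp: 0`.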
00:04:00.662 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [per-key scan: IFS=': '; read -r var val _; [[ $var == HugePages_Rsvd ]] || continue; identical trace repeated for each /proc/meminfo key, MemFree through HugePages_Free, no match]
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:00.664 nr_hugepages=1024
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:00.664 resv_hugepages=0
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:00.664 surplus_hugepages=0
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:00.664 anon_hugepages=0
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.664 11:50:37 setup.sh.hugepages.default_setup --
setup/common.sh@18 -- # local node= 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71214368 kB' 'MemAvailable: 75084564 kB' 'Buffers: 2724 kB' 'Cached: 14764040 kB' 'SwapCached: 0 kB' 'Active: 11673116 kB' 'Inactive: 3702288 kB' 'Active(anon): 11219388 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611928 kB' 'Mapped: 217560 kB' 'Shmem: 10610748 kB' 'KReclaimable: 550140 kB' 'Slab: 1238128 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687988 kB' 'KernelStack: 22800 kB' 'PageTables: 9448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220604 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:00.664-00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [per-key scan: IFS=': '; read -r var val _; [[ $var == HugePages_Total ]] || continue; identical trace repeated for each /proc/meminfo key, MemTotal through Mlocked] 00:04:00.665 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
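The trace above is setup/common.sh's get_meminfo helper walking a meminfo file with `IFS=': ' read -r var val _` and echoing the value once the requested field name matches. A minimal stand-alone sketch of that pattern (the helper name and the /tmp sample path are hypothetical, not SPDK's actual code):

```shell
# Hypothetical re-creation of the field-matching loop traced above:
# split each "Field: value [unit]" line on ": " and print the value
# of the first line whose field name matches exactly.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every line until the field name matches (the "continue"
        # entries that dominate the trace).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Self-contained demo against a synthetic meminfo snapshot.
printf '%s\n' 'MemTotal: 131223228 kB' 'HugePages_Total: 1024' \
    'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo_field HugePages_Total /tmp/meminfo.sample   # prints 1024
```

This is why the log shows one `[[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` pair per meminfo line: xtrace prints every iteration of the read loop until the HugePages_Total line is reached.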
get_meminfo HugePages_Surp 0
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 37451716 kB' 'MemUsed: 10616680 kB' 'SwapCached: 0 kB' 'Active: 6315108 kB' 'Inactive: 438924 kB' 'Active(anon): 6006744 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396296 kB' 'Mapped: 53192 kB' 'AnonPages: 360912 kB' 'Shmem: 5649008 kB' 'KernelStack: 13800 kB' 'PageTables: 5652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 677928 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 332060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB'
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 11:50:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
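The long run of `continue` iterations traced above is the setup/common.sh meminfo scan rejecting every `/proc/meminfo` key until it reaches the one requested (here `HugePages_Surp`, whose value `0` is then echoed). A minimal standalone sketch of that pattern; the function name `meminfo_get` and the sample input are illustrative, not names from the SPDK scripts:

```shell
# Hypothetical helper mirroring the traced loop: split each
# "Key: value [kB]" line with IFS=': ', skip non-matching keys
# (the repeated `continue` events in the trace), and print the
# value of the first matching key.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
}

# Sample meminfo-style input (values taken from the snapshot above).
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' |
    meminfo_get HugePages_Surp    # prints 0
```

The trailing `_` in the `read` soaks up the `kB` unit suffix so `val` stays a bare number.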
00:04:00.667 node0=1024 expecting 1024
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:00.667
00:04:00.667 real 0m4.145s
00:04:00.667 user 0m1.349s
00:04:00.667 sys 0m2.013s
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.667 11:50:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:00.667 ************************************
00:04:00.667 END TEST default_setup
00:04:00.667 ************************************
00:04:00.667 11:50:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:00.667 11:50:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.667 11:50:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.667 11:50:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:00.927 ************************************
00:04:00.927 START TEST per_node_1G_alloc
00:04:00.927 ************************************
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:00.927 11:50:37
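The `get_test_nr_hugepages_per_node 0 1` call traced above walks the user-supplied node list and assigns the full per-node page count to each entry, producing the two `nodes_test[_no_nodes]=512` assignments in the trace (512 pages of the 2048 kB default size is the 1048576 kB, i.e. 1 GB, requested per node). A reduced sketch of just that loop; variable names follow the trace, and this is a simplification of the script, not its full logic:

```shell
# Sketch of the traced per-node accounting: each node named in
# user_nodes gets the full per-node hugepage count.
user_nodes=('0' '1')     # from HUGENODE=0,1 in the trace
_nr_hugepages=512        # 1048576 kB / 2048 kB default hugepage size
nodes_test=()
for _no_nodes in "${user_nodes[@]}"; do
    # _no_nodes is used as an arithmetic index, so node "0" and "1"
    # become array slots 0 and 1, each set to 512.
    nodes_test[_no_nodes]=$_nr_hugepages
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # prints node0=512 node1=512
```

The two identical `@70`/`@71` trace pairs above are exactly the two iterations of this loop.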
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.927 11:50:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:03.469 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:03.469 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:03.469 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71222188 kB' 'MemAvailable: 75092384 kB' 'Buffers: 2724 kB' 'Cached: 14764144 kB' 'SwapCached: 0 kB' 'Active: 11675536 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221808 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614412 kB' 'Mapped: 217280 kB' 'Shmem: 10610852 kB' 'KReclaimable: 550140 kB' 'Slab: 1238128 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687988 kB' 'KernelStack: 22864 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12722636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220940 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.734 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40
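Both `get_meminfo` calls traced here take the global branch: no node argument is given, so the per-node sysfs test at common.sh@23 probes `/sys/devices/system/node/node/meminfo` (empty node suffix), which does not exist, and `mem_f` stays `/proc/meminfo`. A sketch of that selection under those assumptions; `pick_meminfo` is an illustrative name, though the sysfs path layout is the real per-NUMA-node one:

```shell
# Choose the meminfo source the way the trace does: prefer the
# per-node file /sys/devices/system/node/node<N>/meminfo when it
# exists, otherwise fall back to the global /proc/meminfo.
pick_meminfo() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

pick_meminfo ""    # no node given -> prints /proc/meminfo
```

When the per-node file is used, its lines carry a `Node <N> ` prefix, which the traced `mem=("${mem[@]#Node +([0-9]) }")` expansion strips before the key scan runs.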
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71222692 kB' 'MemAvailable: 75092888 kB' 'Buffers: 2724 kB' 'Cached: 14764148 kB' 'SwapCached: 0 kB' 'Active: 11675456 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221728 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614204 kB' 'Mapped: 217280 kB' 'Shmem: 10610856 kB' 'KReclaimable: 550140 kB' 'Slab: 1238172 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688032 kB' 'KernelStack: 23008 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220876 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.737 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
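The trace above is SPDK's `get_meminfo` helper from `setup/common.sh` scanning `/proc/meminfo` one field at a time (`read -r var val _` under `IFS=': '`, with `continue` past every non-matching key) until it finds the requested key and echoes its value. As a rough illustration only, here is a minimal Python sketch of that same scan pattern, fed the values from the snapshot printed above; `get_meminfo` here is a hypothetical re-implementation, not the project's actual bash helper:

```python
# Hypothetical sketch of the get_meminfo scan traced in the log above
# (the real helper is bash in setup/common.sh, not this Python code).
def get_meminfo(key: str, meminfo_text: str) -> int:
    for line in meminfo_text.splitlines():
        var, _, rest = line.partition(":")
        if var.strip() != key:
            continue  # mirrors the repeated "continue" entries in the trace
        return int(rest.split()[0])  # kB for sizes, page count for HugePages_*
    return 0

# Field values copied from the meminfo snapshot printed in the log.
sample = """\
MemTotal: 92286600 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 2097152 kB"""

print(get_meminfo("HugePages_Surp", sample))   # 0, matching surp=0 in the log
print(get_meminfo("HugePages_Total", sample))  # 1024

# Consistency check on the snapshot itself: 1024 huge pages of 2048 kB each
# account exactly for the reported Hugetlb of 2097152 kB.
assert get_meminfo("HugePages_Total", sample) * get_meminfo("Hugepagesize", sample) == 2097152
```

The snapshot's huge-page numbers are internally consistent: 1024 pages × 2048 kB = 2097152 kB, the reported `Hugetlb` total, with all pages free and none reserved or surplus.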
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71222328 kB' 'MemAvailable: 75092524 kB' 'Buffers: 2724 kB' 'Cached: 14764164 kB' 'SwapCached: 0 kB' 'Active: 11675416 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221688 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614104 kB' 'Mapped: 217280 kB' 'Shmem: 10610872 kB' 'KReclaimable: 550140 kB' 'Slab: 1238172 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688032 kB' 'KernelStack: 23040 kB' 'PageTables: 9864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12722680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220892 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.738 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same four trace entries ("[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]", "continue", "IFS=': '", "read -r var val _") repeat for each /proc/meminfo field from MemFree through NFS_Unstable ...]
00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.739 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.740 nr_hugepages=1024 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.740 resv_hugepages=0 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.740 surplus_hugepages=0 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.740 anon_hugepages=0 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71222372 kB' 'MemAvailable: 75092568 kB' 'Buffers: 2724 kB' 'Cached: 14764188 kB' 'SwapCached: 0 kB' 'Active: 11674892 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221164 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613604 kB' 'Mapped: 217280 kB' 'Shmem: 10610896 kB' 'KReclaimable: 550140 kB' 'Slab: 1238172 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688032 kB' 'KernelStack: 22688 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12719860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220796 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 
11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.740 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.741 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.742 11:50:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.742 
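[editor's note] The trace above shows setup/common.sh's `get_meminfo` completing a lookup: each meminfo line is split with `IFS=': ' read -r var val _`, non-matching fields fall through the repeated `[[ ... ]] / continue` steps, and the matching field's value is echoed (`echo 1024 / return 0`). A minimal sketch of that loop, assuming an illustrative function name (`get_meminfo_sketch` is not SPDK's actual code):

```shell
#!/usr/bin/env bash
# Hedged sketch of the field lookup performed in the trace: split each
# "Field: value" line on ': ', skip non-matching fields with `continue`,
# and echo the value once the requested field is found.
get_meminfo_sketch() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # the repeated "continue" steps in the log
    echo "$val"
    return 0
  done < "$mem_f"
  return 1
}

# Demo against a canned snippet rather than the live /proc/meminfo:
get_meminfo_sketch HugePages_Total \
  <(printf '%s\n' 'MemTotal: 48068396 kB' 'HugePages_Total: 1024')  # → 1024
```

Scanning with `read` rather than `grep`/`awk` keeps the lookup in pure bash, which is why the xtrace log records one `IFS`/`read`/`continue` triple per meminfo field.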
11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38493112 kB' 'MemUsed: 9575284 kB' 'SwapCached: 0 kB' 'Active: 6315904 kB' 'Inactive: 438924 kB' 'Active(anon): 6007540 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396356 kB' 'Mapped: 53672 kB' 'AnonPages: 361668 kB' 'Shmem: 5649068 kB' 'KernelStack: 13832 kB' 'PageTables: 5700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 677992 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 332124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 
11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.742 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.743 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 11:50:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.004 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.005 11:50:41 
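[editor's note] The per-node lookups above (setup/common.sh@23-29) switch `mem_f` to `/sys/devices/system/node/nodeN/meminfo`, slurp it with `mapfile`, and strip the leading "Node N " prefix (which `/proc/meminfo` lines lack) using an extglob expansion before running the same field scan. A self-contained sketch of that prefix stripping, using a canned stand-in for node0's meminfo:

```shell
#!/usr/bin/env bash
# Hedged sketch of the "Node N " prefix stripping visible in the trace
# (mem=("${mem[@]#Node +([0-9]) }") at setup/common.sh@29).
shopt -s extglob   # required for the +([0-9]) pattern below

# Canned stand-in for /sys/devices/system/node/node0/meminfo content;
# the real script fills `mem` with mapfile from that file.
mapfile -t mem < <(printf '%s\n' 'Node 0 HugePages_Total: 512' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix from every line
printf '%s\n' "${mem[@]}"
# → HugePages_Total: 512
# → HugePages_Surp: 0
```

After the strip, the per-node lines have the same "Field: value" shape as `/proc/meminfo`, so the same `IFS=': ' read` loop handles both cases — which is why the node0 and node1 traces look identical to the global one apart from the `mem_f` path.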
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 32725940 kB' 'MemUsed: 11492264 kB' 'SwapCached: 0 kB' 'Active: 5362864 kB' 'Inactive: 3263364 kB' 'Active(anon): 5217500 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8370600 kB' 'Mapped: 164088 kB' 'AnonPages: 255756 kB' 'Shmem: 4961872 kB' 'KernelStack: 8952 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204272 kB' 'Slab: 560324 kB' 'SReclaimable: 204272 kB' 'SUnreclaim: 356052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc 
00:04:04.005 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: the IFS=': ' read loop scanned the remaining /proc/meminfo keys (Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Surp; none matched, each iteration hit continue] 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.007 node0=512 expecting 512 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512
expecting 512' 00:04:04.007 node1=512 expecting 512 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:04.007 00:04:04.007 real 0m3.087s 00:04:04.007 user 0m1.269s 00:04:04.007 sys 0m1.858s 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.007 11:50:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.007 ************************************ 00:04:04.007 END TEST per_node_1G_alloc 00:04:04.007 ************************************ 00:04:04.007 11:50:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:04.007 11:50:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.007 11:50:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.007 11:50:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.007 ************************************ 00:04:04.007 START TEST even_2G_alloc 00:04:04.007 ************************************ 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 
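The xtrace runs above all come from the same pattern in SPDK's setup/common.sh get_meminfo helper: read /proc/meminfo (or a per-node meminfo file) line by line with IFS=': ', compare each key against the one requested, and echo its value on a match. A minimal standalone sketch of that pattern (simplified and with assumed variable names; the real helper also handles /sys/devices/system/node/node$node/meminfo and strips the "Node N" prefix):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style scan seen in the xtrace: split each
# meminfo line on ': ' into key and value, print the requested key's value.
get=HugePages_Surp   # key to look up, as in the log above
while IFS=': ' read -r var val _; do
  if [[ $var == "$get" ]]; then
    echo "$val"
    break   # the real helper keeps scanning; break is enough for a sketch
  fi
done < /proc/meminfo
```

In the log each non-matching key shows up as a `continue` iteration, which is why the loop dominates the output.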
00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]] 00:04:04.007 11:50:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.307 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.307 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.307 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # 
local resv 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71211648 kB' 'MemAvailable: 75081844 kB' 'Buffers: 2724 kB' 'Cached: 14764296 kB' 'SwapCached: 0 kB' 'Active: 11673016 kB' 'Inactive: 3702288 kB' 'Active(anon): 11219288 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 611600 kB' 'Mapped: 216164 kB' 'Shmem: 10611004 kB' 'KReclaimable: 550140 kB' 'Slab: 1238032 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687892 kB' 'KernelStack: 22656 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12710476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220732 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:07.307 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: the IFS=': ' read loop scanned the meminfo keys MemTotal through HardwareCorrupted against AnonHugePages; none matched, each iteration hit continue] 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.308 11:50:44
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.308 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71211732 kB' 'MemAvailable: 75081928 kB' 'Buffers: 2724 kB' 'Cached: 14764300 kB' 'SwapCached: 0 kB' 'Active: 11672540 kB' 'Inactive: 3702288 kB' 'Active(anon): 11218812 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611164 kB' 'Mapped: 216220 kB' 'Shmem: 10611008 kB' 'KReclaimable: 550140 kB' 'Slab: 1238068 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687928 kB' 'KernelStack: 22720 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12710624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220716 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 
11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.309 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 
11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 
-- # get_meminfo HugePages_Rsvd 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.310 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71212332 kB' 'MemAvailable: 75082528 kB' 'Buffers: 2724 kB' 'Cached: 14764300 kB' 'SwapCached: 0 kB' 'Active: 11672240 kB' 'Inactive: 3702288 kB' 'Active(anon): 11218512 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610848 kB' 'Mapped: 216220 kB' 'Shmem: 10611008 kB' 'KReclaimable: 550140 kB' 'Slab: 1238068 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687928 kB' 'KernelStack: 22720 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 
'Committed_AS: 12710648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220716 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 
11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.311 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 
11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.312 nr_hugepages=1024 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.312 resv_hugepages=0 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.312 surplus_hugepages=0 00:04:07.312 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.313 anon_hugepages=0 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71211828 kB' 'MemAvailable: 75082024 kB' 'Buffers: 2724 kB' 'Cached: 14764356 kB' 'SwapCached: 0 kB' 'Active: 11672252 kB' 'Inactive: 3702288 kB' 'Active(anon): 11218524 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610756 kB' 'Mapped: 216220 kB' 'Shmem: 10611064 kB' 'KReclaimable: 550140 kB' 'Slab: 1238068 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687928 kB' 'KernelStack: 22704 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12710676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220716 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 
11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.313 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.314 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.314 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38504528 kB' 'MemUsed: 9563868 kB' 'SwapCached: 0 kB' 'Active: 6315512 kB' 'Inactive: 438924 kB' 'Active(anon): 6007148 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396384 kB' 'Mapped: 52188 kB' 'AnonPages: 361276 kB' 'Shmem: 5649096 kB' 'KernelStack: 13848 kB' 'PageTables: 5712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 677768 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 331900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.315 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 32707072 kB' 'MemUsed: 11511132 kB' 'SwapCached: 0 kB' 'Active: 5357440 kB' 'Inactive: 3263364 kB' 'Active(anon): 5212076 kB' 
'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8370732 kB' 'Mapped: 164032 kB' 'AnonPages: 250240 kB' 'Shmem: 4962004 kB' 'KernelStack: 8888 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204272 kB' 'Slab: 560300 kB' 'SReclaimable: 204272 kB' 'SUnreclaim: 356028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.316 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical [[ field == HugePages_Surp ]]/continue iterations repeated for each remaining /proc/meminfo field ...] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.317 node0=512 expecting 512 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:07.317 node1=512 expecting 512 00:04:07.317 11:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.317 00:04:07.318 real 0m3.164s 00:04:07.318 user 0m1.288s 00:04:07.318 sys 0m1.915s 00:04:07.318 11:50:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.318 11:50:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.318 ************************************ 00:04:07.318 END TEST even_2G_alloc 00:04:07.318 ************************************ 00:04:07.318 11:50:44 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:07.318 11:50:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.318 11:50:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.318 11:50:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.318 
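The walls of read/continue lines in this trace are bash xtrace from the meminfo parser in setup/common.sh. A rough, hypothetical sketch of that pattern follows (the function name get_meminfo and the IFS=': ' / read -r var val _ loop come straight from the trace; the optional file argument is an addition here for illustration, and this is not the actual SPDK source): split each "Field: value" line of /proc/meminfo on ': ' and return the value of the requested field.

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern visible in the xtrace above.
# For each line, IFS=': ' splits "Field: value [unit]" into var/val; when
# var matches the requested field, its value is echoed and the loop exits.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$file"
    echo 0   # field not present in the file
}

get_meminfo HugePages_Surp
```

In the trace, each non-matching field produces one `[[ field == HugePages_Surp ]]` test followed by `continue`, until the requested field matches and `echo 0` returns the surplus-page count.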
************************************ 00:04:07.318 START TEST odd_alloc 00:04:07.318 ************************************ 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.318 11:50:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.854 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.854 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:04:09.854 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.854 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.118 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71234780 kB' 'MemAvailable: 75104976 kB' 'Buffers: 2724 kB' 'Cached: 14764464 kB' 'SwapCached: 0 kB' 'Active: 11673356 kB' 'Inactive: 3702288 kB' 'Active(anon): 11219628 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611588 kB' 'Mapped: 216240 kB' 'Shmem: 10611172 kB' 'KReclaimable: 550140 kB' 'Slab: 1237820 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687680 kB' 'KernelStack: 22736 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12711644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220812 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.118 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical [[ field == AnonHugePages ]]/continue iterations repeated for each remaining /proc/meminfo field ...] 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.119 
11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.119 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.120 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71235824 kB' 'MemAvailable: 75106020 kB' 'Buffers: 2724 kB' 'Cached: 14764468 kB' 'SwapCached: 0 kB' 'Active: 11673564 kB' 'Inactive: 3702288 kB' 'Active(anon): 11219836 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611848 kB' 'Mapped: 216236 kB' 'Shmem: 10611176 kB' 'KReclaimable: 550140 kB' 'Slab: 1237828 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687688 kB' 'KernelStack: 22752 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12711664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 
'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:10.120 [setup/common.sh@31/@32 IFS=': '/read/continue trace repeated for every field from MemTotal through HugePages_Rsvd, none matching \H\u\g\e\P\a\g\e\s\_\S\u\r\p] 00:04:10.121 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.121 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.121 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.121 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.121 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.122 11:50:47
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.122 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71237492 kB' 'MemAvailable: 75107688 kB' 'Buffers: 2724 kB' 'Cached: 14764484 kB' 'SwapCached: 0 kB' 'Active: 11673584 kB' 'Inactive: 3702288 kB' 'Active(anon): 11219856 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611852 kB' 'Mapped: 216236 kB' 'Shmem: 10611192 kB' 'KReclaimable: 550140 kB' 'Slab: 1237828 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687688 kB' 'KernelStack: 22752 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12711684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:10.122 [setup/common.sh@31/@32 IFS=': '/read/continue trace repeated for the fields from MemTotal through VmallocTotal, none matching \H\u\g\e\P\a\g\e\s\_\R\s\v\d; trace continues] 00:04:10.123 11:50:47
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.123 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.124 11:50:47 
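The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field until it reaches HugePages_Rsvd, then echoing the value. A minimal standalone sketch of that pattern follows; the name meminfo_value and the optional file argument are illustrative, not the SPDK function, and per-NUMA-node meminfo handling is omitted:

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo loop traced above: split each meminfo
# line on ': ' and print the value of the first field that matches.
# "meminfo_value" and the file argument are hypothetical helpers; SPDK's
# real helper is get_meminfo in setup/common.sh.
meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one, as the xtrace shows.
        [[ $var == "$get" ]] || continue
        echo "$val"      # value only; a trailing "kB" unit lands in $_
        return 0
    done < "$file"
    return 1             # field not present
}
```

Against the dump printed later in this log, `meminfo_value HugePages_Rsvd` would print 0 and return 0, matching the `echo 0` / `return 0` lines above.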
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:10.124 nr_hugepages=1025
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.124 resv_hugepages=0
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.124 surplus_hugepages=0
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.124 anon_hugepages=0
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71237256 kB' 'MemAvailable: 75107452 kB' 'Buffers: 2724 kB' 'Cached: 14764504 kB' 'SwapCached: 0 kB' 'Active: 11673572 kB' 'Inactive: 3702288 kB' 'Active(anon): 11219844 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611852 kB' 'Mapped: 216236 kB' 'Shmem: 10611212 kB' 'KReclaimable: 550140 kB' 'Slab: 1237828 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687688 kB' 'KernelStack: 22752 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12711704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:10.124 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
[... identical read/compare/continue iterations for the remaining /proc/meminfo fields (MemFree through Unaccepted) omitted ...]
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.126 11:50:47 
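The get_nodes trace above records an odd hugepage total (1025) split across two NUMA nodes as 512 + 513, and the hugepages.sh@110 check confirms the total matches nr_hugepages + surp + resv. A sketch of that accounting under the same numbers (variable names mirror the script for readability, but this is not the SPDK code):

```shell
#!/usr/bin/env bash
# Sketch of the odd_alloc accounting seen in the trace: an odd total cannot
# split evenly across two nodes, so one node carries the extra page.
nr_hugepages=1025 surp=0 resv=0 no_nodes=2
declare -A nodes_sys
nodes_sys[0]=$(( nr_hugepages / no_nodes ))                            # 512
nodes_sys[1]=$(( nr_hugepages / no_nodes + nr_hugepages % no_nodes ))  # 513
# The same invariant hugepages.sh@110 asserts in the log:
(( nr_hugepages + surp + resv == nodes_sys[0] + nodes_sys[1] ))
echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]}"
```

This prints node0=512 node1=513, matching the per-node nodes_sys assignments recorded in the trace.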
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38518020 kB' 'MemUsed: 9550376 kB' 'SwapCached: 0 kB' 'Active: 6315784 kB' 'Inactive: 438924 kB' 'Active(anon): 6007420 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396416 kB' 'Mapped: 52188 kB' 'AnonPages: 361428 kB' 'Shmem: 5649128 kB' 'KernelStack: 13864 kB' 'PageTables: 5700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 677960 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 332092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.126 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.387 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.388 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 32719736 kB' 'MemUsed: 11498468 kB' 'SwapCached: 0 kB' 'Active: 5357844 kB' 'Inactive: 3263364 kB' 'Active(anon): 5212480 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8370856 kB' 'Mapped: 164048 kB' 'AnonPages: 250420 kB' 'Shmem: 4962128 kB' 'KernelStack: 8888 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204272 kB' 'Slab: 559868 kB' 'SReclaimable: 204272 kB' 'SUnreclaim: 355596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.389 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.390 
11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:10.390 node0=512 expecting 513 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:10.390 node1=513 expecting 512 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:10.390 00:04:10.390 real 0m3.094s 00:04:10.390 user 0m1.273s 00:04:10.390 sys 0m1.863s 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.390 11:50:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.390 ************************************ 00:04:10.390 END TEST odd_alloc 00:04:10.390 ************************************ 00:04:10.390 11:50:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.390 11:50:47 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.390 
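The xtrace runs above are the body of the `get_meminfo` helper in `setup/common.sh`: it splits each `/proc/meminfo` line on `IFS=': '` and echoes the value once the requested key matches, falling through with `continue` otherwise. A minimal standalone sketch of that parsing technique; it reads an inlined sample instead of the live `/proc/meminfo` so the values are fixed (the sample numbers are illustrative, taken from this run's output):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: scan "Key: value [unit]" lines,
# print the value for the requested key. The heredoc stands in for
# /proc/meminfo so the result is deterministic.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <<'EOF'
MemTotal: 92286600 kB
HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Surp: 0
Hugepagesize: 2048 kB
EOF
    return 1
}

get_meminfo HugePages_Surp    # prints 0
get_meminfo HugePages_Total   # prints 1536
```

Because `:` is a non-whitespace IFS character and the space is whitespace, `read` collapses `Key: value` into two clean fields, with any trailing unit (`kB`) falling into the throwaway `_` variable.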
11:50:47 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.390 11:50:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.390 ************************************ 00:04:10.390 START TEST custom_alloc 00:04:10.390 ************************************ 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.390 11:50:47 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.390 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.391 11:50:47 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.391 11:50:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.689 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:13.689 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.689 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 
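The trace above shows `hugepages.sh` assembling the `HUGENODE` variable from the per-node `nodes_hp` array before invoking `setup.sh` (512 pages on node0, 1024 on node1, 1536 total). The comma-joined string comes from the `local IFS=,` at `hugepages.sh@167` combined with `"${arr[*]}"` expansion; a self-contained sketch of that assembly step, with the node counts hard-coded to mirror this run:

```shell
#!/usr/bin/env bash
# Sketch of the HUGENODE assembly loop from hugepages.sh: each nodes_hp[]
# entry becomes a "nodes_hp[N]=count" fragment; "${HUGENODE[*]}" joins the
# fragments with the first IFS character (here ",").
IFS=,
nodes_hp=([0]=512 [1]=1024)
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "${HUGENODE[*]}"    # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$_nr_hugepages"    # 1536
```

`setup.sh` later parses that string back apart (splitting on the same `,`) to write each count into the matching node's `nr_hugepages` sysfs file.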
00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70182460 kB' 'MemAvailable: 74052656 kB' 'Buffers: 2724 kB' 'Cached: 14764628 kB' 'SwapCached: 0 kB' 'Active: 11675248 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221520 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613108 
kB' 'Mapped: 216328 kB' 'Shmem: 10611336 kB' 'KReclaimable: 550140 kB' 'Slab: 1237496 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687356 kB' 'KernelStack: 22736 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12712624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.689 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.690 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70182560 kB' 'MemAvailable: 74052756 kB' 'Buffers: 2724 kB' 'Cached: 14764632 kB' 'SwapCached: 0 kB' 'Active: 11675100 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221372 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613484 kB' 'Mapped: 216248 kB' 'Shmem: 10611340 kB' 'KReclaimable: 550140 kB' 'Slab: 1237488 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687348 kB' 'KernelStack: 22752 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12712644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 
11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.691 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.692 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@20 -- # local mem_f mem 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70182836 kB' 'MemAvailable: 74053032 kB' 'Buffers: 2724 kB' 'Cached: 14764648 kB' 'SwapCached: 0 kB' 'Active: 11675124 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221396 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613492 kB' 'Mapped: 216248 kB' 'Shmem: 10611356 kB' 'KReclaimable: 550140 kB' 'Slab: 1237488 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687348 kB' 'KernelStack: 22752 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12712664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.693 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 
11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.694 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:13.695 nr_hugepages=1536 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.695 resv_hugepages=0 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.695 surplus_hugepages=0 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.695 anon_hugepages=0 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70183104 kB' 'MemAvailable: 74053300 kB' 'Buffers: 2724 kB' 'Cached: 14764688 kB' 'SwapCached: 0 kB' 'Active: 11674800 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221072 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613088 kB' 'Mapped: 216248 kB' 'Shmem: 10611396 kB' 'KReclaimable: 550140 kB' 'Slab: 1237488 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687348 kB' 'KernelStack: 22736 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
52959040 kB' 'Committed_AS: 12712684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.695 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.696 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38513556 kB' 'MemUsed: 9554840 kB' 'SwapCached: 0 kB' 'Active: 6316936 kB' 'Inactive: 438924 kB' 'Active(anon): 6008572 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396460 kB' 'Mapped: 52188 kB' 'AnonPages: 362736 kB' 'Shmem: 5649172 kB' 'KernelStack: 13880 kB' 'PageTables: 5752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 677544 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 331676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 
kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.697 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.698 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31674236 kB' 'MemUsed: 12543968 kB' 'SwapCached: 0 kB' 'Active: 5358212 kB' 'Inactive: 3263364 kB' 'Active(anon): 5212848 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8370972 kB' 'Mapped: 164000 kB' 'AnonPages: 250704 kB' 'Shmem: 4962244 kB' 'KernelStack: 8856 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204272 kB' 'Slab: 559944 kB' 'SReclaimable: 204272 kB' 'SUnreclaim: 355672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 
11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 
11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.699 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.700 node0=512 expecting 512 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:13.700 node1=1024 expecting 1024 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 
00:04:13.700 00:04:13.700 real 0m3.088s 00:04:13.700 user 0m1.251s 00:04:13.700 sys 0m1.867s 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.700 11:50:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.700 ************************************ 00:04:13.700 END TEST custom_alloc 00:04:13.700 ************************************ 00:04:13.700 11:50:50 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:13.700 11:50:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.700 11:50:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.700 11:50:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.700 ************************************ 00:04:13.700 START TEST no_shrink_alloc 00:04:13.700 ************************************ 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.700 11:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.273 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.273 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
00:04:16.273 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.273 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.538 
11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71169812 kB' 'MemAvailable: 75040008 kB' 'Buffers: 2724 kB' 'Cached: 14764772 kB' 'SwapCached: 0 kB' 'Active: 11676384 kB' 'Inactive: 3702288 kB' 'Active(anon): 11222656 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614472 kB' 'Mapped: 216272 kB' 'Shmem: 10611480 kB' 'KReclaimable: 550140 kB' 'Slab: 1237576 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687436 kB' 'KernelStack: 22752 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12713316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220812 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.538 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.539 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.540 11:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71166424 kB' 'MemAvailable: 75036620 kB' 'Buffers: 2724 kB' 'Cached: 14764772 kB' 'SwapCached: 0 kB' 'Active: 11676740 kB' 'Inactive: 3702288 kB' 'Active(anon): 11223012 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613888 kB' 'Mapped: 216256 kB' 'Shmem: 10611480 kB' 'KReclaimable: 550140 kB' 'Slab: 1237616 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687476 kB' 'KernelStack: 22768 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12728416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.540 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue / IFS=': ' / read -r trace entries repeated for each remaining /proc/meminfo field listed in the printf dump above ...]
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
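The trace above is a `get_meminfo`-style lookup: the script snapshots `/proc/meminfo`, then scans it line by line with `IFS=': '` and `read -r var val _`, `continue`-ing past every field until the requested one (here `HugePages_Surp`) matches, at which point it echoes the value and returns. A minimal stand-alone sketch of that pattern follows; the function body and the sample input file are illustrative, not SPDK's exact `setup/common.sh` code.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern from the trace: split each
# "Key: value kB" line on ': ', skip until the requested key matches,
# then print its value. Illustrative only, not SPDK's implementation.
get_meminfo() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        # Skip every line until the requested field is found.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demonstrate against a small stand-in for /proc/meminfo.
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 92286600 kB' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' > "$tmp"
get_meminfo HugePages_Surp "$tmp"   # prints: 0
rm -f "$tmp"
```

Reading the whole file into an array first (as the real script does with `mapfile -t mem`) avoids re-opening `/proc/meminfo` between fields, so the values come from one consistent snapshot.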
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71166468 kB' 'MemAvailable: 75036664 kB' 'Buffers: 2724 kB' 'Cached: 14764796 kB' 'SwapCached: 0 kB' 'Active: 11675740 kB' 'Inactive: 3702288 kB' 'Active(anon): 11222012 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613988 kB' 'Mapped: 216256 kB' 'Shmem: 10611504 kB' 'KReclaimable: 550140 kB' 'Slab: 1237616 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687476 kB' 'KernelStack: 22720 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12712992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220732 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.543 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / continue / IFS=': ' / read -r trace entries repeated for each subsequent /proc/meminfo field ...]
00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.545 11:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.545 nr_hugepages=1024 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.545 resv_hugepages=0 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.545 surplus_hugepages=0 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.545 anon_hugepages=0 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.545 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
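The trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` line by line with `IFS=': ' read -r var val _`, `continue`-ing until the key matches and then echoing the value. A minimal sketch of that parse pattern (assuming bash; `get_field` and the temp-file demo are illustrative names, not part of SPDK's scripts):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup: split each line on ': ' and
# emit the value once the requested key is found.
get_field() {
  local want=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    # e.g. "HugePages_Total:    1024" -> var=HugePages_Total val=1024
    [[ $var == "$want" ]] && { echo "$val"; return 0; }
  done < "$file"
  return 1
}

# Deterministic demo against a small sample instead of the live file:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 92286600 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > "$sample"
get_field HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

Setting `IFS=': '` makes `read` split on both the colon and the surrounding spaces, so the unit suffix (`kB`) lands in the throwaway `_` variable.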
-- # mapfile -t mem 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71166744 kB' 'MemAvailable: 75036940 kB' 'Buffers: 2724 kB' 'Cached: 14764836 kB' 'SwapCached: 0 kB' 'Active: 11675408 kB' 'Inactive: 3702288 kB' 'Active(anon): 11221680 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613544 kB' 'Mapped: 216256 kB' 'Shmem: 10611544 kB' 'KReclaimable: 550140 kB' 'Slab: 1237616 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 687476 kB' 'KernelStack: 22720 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12713140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220732 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.546 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical compare/continue trace repeated for every remaining /proc/meminfo field (MemFree through Unaccepted) until the key matches ...] 00:04:16.547 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 37447172 kB' 'MemUsed: 10621224 kB' 'SwapCached: 0 kB' 'Active: 6317132 kB' 'Inactive: 438924 kB' 'Active(anon): 6008768 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396496 kB' 'Mapped: 52188 kB' 'AnonPages: 362840 kB' 'Shmem: 5649208 kB' 'KernelStack: 13864 kB' 'PageTables: 5660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 677852 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 331984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.548 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:16.549 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo
'node0=1024 expecting 1024'
00:04:16.550 node0=1024 expecting 1024
00:04:16.550 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:16.550 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:16.550 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:16.550 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:16.550 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:16.550 11:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:19.849 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:19.849 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:19.849 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:19.849 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.849 11:50:56
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71172876 kB' 'MemAvailable: 75043072 kB' 'Buffers: 2724 kB' 'Cached: 14764916 kB' 'SwapCached: 0 kB' 'Active: 11676864 kB' 'Inactive: 3702288 kB' 'Active(anon): 11223136 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614808 kB' 'Mapped: 216388 kB' 'Shmem: 10611624 kB' 'KReclaimable: 550140 kB' 'Slab: 1238224 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688084 kB' 'KernelStack: 22784 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12714180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220812 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:19.849 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.849 11:50:56
11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71173916 kB' 'MemAvailable: 75044112 kB' 'Buffers: 2724 kB' 'Cached: 14764920 kB' 'SwapCached: 0 kB' 'Active: 11676564 kB' 'Inactive: 3702288 kB' 'Active(anon): 11222836 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614584 kB' 'Mapped: 216324 kB' 'Shmem: 10611628 kB' 'KReclaimable: 550140 kB' 'Slab: 1238336 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688196 kB' 'KernelStack: 22800 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12714444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.851 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 
11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 
11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.852 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [repetitive per-key trace condensed: HugePages_Free and HugePages_Rsvd fail the HugePages_Surp match and hit 'continue']
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71174552 kB' 'MemAvailable: 75044748 kB' 'Buffers: 2724 kB' 'Cached: 14764940 kB' 'SwapCached: 0 kB' 'Active: 11676728 kB' 'Inactive: 3702288 kB' 'Active(anon): 11223000 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614720 kB' 'Mapped: 216264 kB' 'Shmem: 10611648 kB' 'KReclaimable: 550140 kB' 'Slab: 1238336 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688196 kB' 'KernelStack: 22800 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12714220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:19.853 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [per-key scan condensed: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits 'continue']
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:19.855 nr_hugepages=1024
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.855 resv_hugepages=0
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.855 surplus_hugepages=0
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.855 anon_hugepages=0
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.855 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71174924 kB' 'MemAvailable: 75045120 kB' 'Buffers: 2724 kB' 'Cached: 14764980 kB' 'SwapCached: 0 kB' 'Active: 11676292 kB' 'Inactive: 3702288 kB' 'Active(anon): 11222564 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614176 kB' 'Mapped: 216324 kB' 'Shmem: 10611688 kB' 'KReclaimable: 550140 kB' 'Slab: 1238336 kB' 'SReclaimable: 550140 kB' 'SUnreclaim: 688196 kB' 'KernelStack: 22784 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12714240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220748 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4516820 kB' 'DirectMap2M: 58077184 kB' 'DirectMap1G: 38797312 kB'
00:04:19.856 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [per-key scan condensed: fields from MemTotal through SReclaimable fail the HugePages_Total match and hit 'continue'; scan continues] 00:04:19.857
11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 
11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.857 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 37446184 kB' 'MemUsed: 10622212 kB' 'SwapCached: 0 kB' 'Active: 6318100 kB' 'Inactive: 438924 kB' 'Active(anon): 6009736 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6396532 kB' 'Mapped: 52188 kB' 'AnonPages: 363780 kB' 'Shmem: 5649244 kB' 'KernelStack: 13944 kB' 'PageTables: 5932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 678512 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 332644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[identical "setup/common.sh@31 IFS=': ' / read -r var val _ / setup/common.sh@32 [[ <key> == HugePages_Surp ]] / continue" trace repeated for each node0 meminfo key, MemTotal through HugePages_Free, elided]
00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.859 node0=1024 expecting 1024 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.859 00:04:19.859 real 0m6.144s 00:04:19.859 user 0m2.465s 00:04:19.859 sys 0m3.752s 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.859 11:50:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.859 ************************************ 00:04:19.859 END TEST no_shrink_alloc 00:04:19.859 ************************************ 00:04:19.859 11:50:56
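The get_meminfo trace above follows one pattern: mapfile the meminfo file (stripping the "Node N " prefix that per-node files carry), then an IFS=': ' read loop that `continue`s past every key until the requested one matches and its value is echoed. A minimal standalone sketch of that pattern follows; it is a reconstruction from the trace, not SPDK's actual setup/common.sh, and the optional file argument is added here only so it can be run against a scratch file instead of /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the xtrace: read "Key: value kB" lines
# one at a time and print the value once the requested key matches.
shopt -s extglob

get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}   # file arg is a testing convenience
    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix lines with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
        echo "$val"
        return 0
    done
    return 1
}

# Demo against a fake per-node meminfo file.
tmp=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 48068396 kB' 'Node 0 HugePages_Total: 1024' > "$tmp"
get_meminfo HugePages_Total "$tmp"   # prints 1024
rm -f "$tmp"
```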
setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:19.859 11:50:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:19.859 00:04:19.859 real 0m23.311s 00:04:19.859 user 0m9.128s 00:04:19.859 sys 0m13.667s 00:04:19.859 11:50:56 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.859 11:50:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.859 ************************************ 00:04:19.859 END TEST hugepages 00:04:19.859 ************************************ 00:04:19.859 11:50:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:19.859 11:50:56 setup.sh -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.859 11:50:56 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.859 11:50:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.859 ************************************ 00:04:19.859 START TEST driver 00:04:19.859 ************************************ 00:04:19.859 11:50:56 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:19.859 * Looking for test storage... 00:04:19.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.859 11:50:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:19.859 11:50:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.859 11:50:57 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.056 11:51:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:24.056 11:51:01 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.056 11:51:01 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.056 11:51:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.056 ************************************ 00:04:24.056 START TEST guess_driver 00:04:24.056 ************************************ 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local 
iommu_groups 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:24.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- 
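The `is_driver vfio_pci` trace above treats the driver as usable when `modprobe --show-depends` emits at least one `insmod .../*.ko*` line (the `== *\.\k\o*` glob test). A sketch of the same decision against a canned one-line sample of that output, so it runs without the module tree on this host:

```shell
# Mimic setup/driver.sh's is_driver test: the driver counts as present
# when the modprobe dependency listing mentions a .ko file. `depends`
# is a canned sample string here, an assumption for illustration.
depends='insmod /lib/modules/6.7.0/kernel/drivers/vfio/pci/vfio-pci.ko.xz'

case $depends in
    *.ko*) driver=vfio-pci ;;               # real module files resolved
    *)     driver="No valid driver found" ;; # fall through to uio_pci_generic etc.
esac

echo "$driver"   # prints "vfio-pci"
```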
setup/driver.sh@49 -- # driver=vfio-pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:24.056 Looking for driver=vfio-pci 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.056 11:51:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:27.405 11:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.973 11:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.973 11:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:27.973 11:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.232 11:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:28.232 11:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:28.232 11:51:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.232 11:51:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.459 00:04:32.459 real 0m8.248s 00:04:32.459 user 0m2.355s 00:04:32.459 sys 0m4.291s 00:04:32.459 11:51:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.459 11:51:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.459 ************************************ 00:04:32.459 END TEST guess_driver 00:04:32.459 ************************************ 00:04:32.459 00:04:32.459 real 0m12.574s 00:04:32.459 user 0m3.549s 00:04:32.459 sys 0m6.581s 00:04:32.459 11:51:09 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.459 11:51:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.459 ************************************ 00:04:32.459 END TEST driver 00:04:32.459 ************************************ 00:04:32.459 11:51:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:32.459 11:51:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.459 11:51:09 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.459 11:51:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.459 ************************************ 00:04:32.459 START TEST devices 00:04:32.459 ************************************ 00:04:32.459 11:51:09 setup.sh.devices -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:32.459 * Looking for test storage... 00:04:32.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:32.459 11:51:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:32.459 11:51:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:32.459 11:51:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.459 11:51:09 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.750 11:51:12 
setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:35.750 11:51:12 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:35.750 No valid GPT data, bailing 00:04:35.750 11:51:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.750 11:51:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:35.750 11:51:12 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.750 11:51:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount 
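The `sec_size_to_bytes`/`block_in_use` steps traced above convert the kernel's per-device sector count (a tally of 512-byte sectors under /sys/block) into bytes and compare it against the 3221225472-byte (3 GiB) `min_disk_size` floor. A sketch of that arithmetic with the sector count canned to match the ~1 TB disk reported in the trace:

```shell
# Convert a /sys/block/<dev>/size-style sector count to bytes, as
# setup/common.sh does; `sectors` is a canned value here, an assumption
# chosen to reproduce the 1000204886016-byte disk seen above.
sectors=1953525168
min_disk_size=3221225472   # 3 GiB floor from devices.sh@198

bytes=$(( sectors * 512 ))
echo "$bytes"              # prints "1000204886016"

if [ "$bytes" -ge "$min_disk_size" ]; then
    echo "disk large enough for the test pool"
fi
```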
nvme_mount 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.750 11:51:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.750 ************************************ 00:04:35.750 START TEST nvme_mount 00:04:35.750 ************************************ 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- 
# parts+=("${disk}p$part") 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.750 11:51:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:37.129 Creating new GPT entries in memory. 00:04:37.129 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:37.129 other utilities. 00:04:37.129 11:51:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:37.129 11:51:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.129 11:51:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.129 11:51:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.129 11:51:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:38.068 Creating new GPT entries in memory. 00:04:38.068 The operation has completed successfully. 
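The partition bounds in the `sgdisk --new=1:2048:2099199` call above come from the traced arithmetic in setup/common.sh: the requested 1 GiB (`size=1073741824`) is divided by 512 into sectors, the first partition starts at sector 2048, and the end sector is start plus size minus one. The same computation as a standalone sketch:

```shell
# Reproduce the partition bounds computed at setup/common.sh@51/@58-@59:
# 1 GiB expressed in 512-byte sectors, first partition starting at 2048.
size=1073741824            # bytes requested per partition
size=$(( size / 512 ))     # -> 2097152 sectors
part_start=2048            # part_start == 0 ? 2048 : part_end + 1
part_end=$(( part_start + size - 1 ))

echo "--new=1:${part_start}:${part_end}"   # prints "--new=1:2048:2099199"
```

A second partition of the same size would then start at `part_end + 1` (sector 2099200), which is exactly how the loop chains `part_start` for multi-partition runs.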
00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3915507 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.068 11:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:40.606 11:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:40.866 
11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:40.866 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.866 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.126 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:41.126 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:41.126 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:41.126 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.126 11:51:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:44.434 11:51:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.434 11:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.973 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.973 00:04:46.973 real 0m11.182s 00:04:46.973 user 0m3.279s 00:04:46.973 sys 0m5.716s 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.973 11:51:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.973 ************************************ 00:04:46.973 END TEST nvme_mount 00:04:46.973 ************************************ 00:04:46.973 11:51:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:46.973 11:51:24 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:04:46.973 11:51:24 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.973 11:51:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.973 ************************************ 00:04:46.973 START TEST dm_mount 00:04:46.973 ************************************ 00:04:46.973 11:51:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:46.973 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:46.973 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.974 11:51:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.406 Creating new GPT entries in memory. 00:04:48.406 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.406 other utilities. 00:04:48.406 11:51:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.406 11:51:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.406 11:51:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.406 11:51:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.406 11:51:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.344 Creating new GPT entries in memory. 00:04:49.344 The operation has completed successfully. 00:04:49.344 11:51:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.344 11:51:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.344 11:51:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.344 11:51:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.344 11:51:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:50.283 The operation has completed successfully. 
00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3919712 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.283 11:51:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.576 11:51:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:56.112 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:56.112 00:04:56.112 real 0m9.083s 00:04:56.112 user 0m2.258s 00:04:56.112 sys 0m3.825s 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.112 11:51:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:56.112 ************************************ 00:04:56.112 END TEST dm_mount 00:04:56.112 ************************************ 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.112 11:51:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.371 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:56.371 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:56.371 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.371 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.371 11:51:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:56.630 00:04:56.630 real 0m24.064s 00:04:56.630 user 0m6.858s 00:04:56.630 sys 0m11.899s 00:04:56.630 11:51:33 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.630 11:51:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:56.630 ************************************ 00:04:56.630 END TEST devices 00:04:56.630 ************************************ 00:04:56.630 00:04:56.630 real 1m20.853s 00:04:56.630 user 0m26.299s 00:04:56.630 sys 0m44.868s 00:04:56.630 11:51:33 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.630 11:51:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.630 ************************************ 00:04:56.630 END TEST setup.sh 00:04:56.630 ************************************ 00:04:56.630 11:51:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:59.168 Hugepages 00:04:59.168 node hugesize free / total 00:04:59.168 node0 1048576kB 0 / 0 00:04:59.168 node0 2048kB 2048 / 2048 00:04:59.427 node1 1048576kB 0 / 0 00:04:59.427 node1 2048kB 0 / 0 00:04:59.427 00:04:59.427 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.427 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:59.427 
I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:59.427 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:59.427 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:59.427 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:59.428 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:59.428 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:59.428 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:59.428 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:59.428 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:59.428 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:59.428 11:51:36 -- spdk/autotest.sh@130 -- # uname -s 00:04:59.428 11:51:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:59.428 11:51:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:59.428 11:51:36 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.012 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:02.012 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:02.012 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:02.012 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:02.271 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:03.206 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:03.206 11:51:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:04.144 11:51:41 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:05:04.144 11:51:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:04.144 11:51:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.144 11:51:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:04.144 11:51:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:04.144 11:51:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:04.144 11:51:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.144 11:51:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:04.144 11:51:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:04.403 11:51:41 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:04.403 11:51:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:05:04.403 11:51:41 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.935 Waiting for block devices as requested 00:05:07.194 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:05:07.194 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:07.453 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:07.453 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:07.453 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:07.453 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:07.712 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:07.712 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:07.712 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:07.971 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:07.971 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:07.971 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:08.230 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:08.230 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:08.230 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:05:08.230 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:08.489 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:08.489 11:51:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:08.489 11:51:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1502 -- # grep 0000:86:00.0/nvme/nvme 00:05:08.489 11:51:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:05:08.489 11:51:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:08.489 11:51:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:08.489 11:51:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:08.489 11:51:45 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:08.489 11:51:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:08.489 11:51:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:08.489 11:51:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:08.489 11:51:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:08.489 11:51:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:08.489 11:51:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:08.489 11:51:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:08.489 11:51:45 -- 
common/autotest_common.sh@1557 -- # continue 00:05:08.489 11:51:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:08.489 11:51:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.489 11:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.489 11:51:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:08.489 11:51:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.489 11:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.489 11:51:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:11.782 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.782 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:12.350 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.351 11:51:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:12.351 11:51:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.351 11:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.351 11:51:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:12.351 11:51:49 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:12.351 11:51:49 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.351 11:51:49 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:12.351 11:51:49 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:12.351 11:51:49 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:12.351 11:51:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:12.351 11:51:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:12.351 11:51:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.351 11:51:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.351 11:51:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:12.610 11:51:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:12.610 11:51:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:05:12.610 11:51:49 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:12.610 11:51:49 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:05:12.610 11:51:49 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:12.610 11:51:49 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:12.610 11:51:49 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:12.610 11:51:49 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:86:00.0 00:05:12.610 11:51:49 -- common/autotest_common.sh@1592 -- # [[ -z 0000:86:00.0 ]] 00:05:12.610 11:51:49 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3929044 00:05:12.610 11:51:49 -- common/autotest_common.sh@1598 -- # waitforlisten 3929044 00:05:12.610 11:51:49 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.610 11:51:49 -- common/autotest_common.sh@831 -- # '[' -z 3929044 ']' 00:05:12.610 11:51:49 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:12.610 11:51:49 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.610 11:51:49 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.610 11:51:49 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.610 11:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.610 [2024-07-25 11:51:49.756673] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:12.610 [2024-07-25 11:51:49.756737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929044 ] 00:05:12.610 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.610 [2024-07-25 11:51:49.841057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.869 [2024-07-25 11:51:49.929341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.436 11:51:50 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.436 11:51:50 -- common/autotest_common.sh@864 -- # return 0 00:05:13.436 11:51:50 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:13.436 11:51:50 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:13.436 11:51:50 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:05:16.767 nvme0n1 00:05:16.767 11:51:53 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:16.767 [2024-07-25 11:51:54.009063] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 
00:05:16.767 request: 00:05:16.767 { 00:05:16.767 "nvme_ctrlr_name": "nvme0", 00:05:16.767 "password": "test", 00:05:16.767 "method": "bdev_nvme_opal_revert", 00:05:16.767 "req_id": 1 00:05:16.767 } 00:05:16.767 Got JSON-RPC error response 00:05:16.767 response: 00:05:16.767 { 00:05:16.767 "code": -32602, 00:05:16.767 "message": "Invalid parameters" 00:05:16.767 } 00:05:16.767 11:51:54 -- common/autotest_common.sh@1604 -- # true 00:05:16.767 11:51:54 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:16.767 11:51:54 -- common/autotest_common.sh@1608 -- # killprocess 3929044 00:05:16.767 11:51:54 -- common/autotest_common.sh@950 -- # '[' -z 3929044 ']' 00:05:16.767 11:51:54 -- common/autotest_common.sh@954 -- # kill -0 3929044 00:05:16.767 11:51:54 -- common/autotest_common.sh@955 -- # uname 00:05:16.767 11:51:54 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.767 11:51:54 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3929044 00:05:17.025 11:51:54 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.025 11:51:54 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.025 11:51:54 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3929044' 00:05:17.025 killing process with pid 3929044 00:05:17.025 11:51:54 -- common/autotest_common.sh@969 -- # kill 3929044 00:05:17.025 11:51:54 -- common/autotest_common.sh@974 -- # wait 3929044 00:05:18.931 11:51:55 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:18.932 11:51:55 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:18.932 11:51:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:18.932 11:51:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:18.932 11:51:55 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:18.932 11:51:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.932 11:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:18.932 11:51:55 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:18.932 
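The `bdev_nvme_opal_revert` failure above is reported as a standard JSON-RPC 2.0 error (code -32602, "Invalid params") because this controller does not support Opal, and the harness tolerates it with `true`. A minimal sketch of decoding such an error body follows; the helper name is illustrative and not part of SPDK, and only the standard JSON-RPC 2.0 error codes are assumed:

```python
import json

# Standard JSON-RPC 2.0 error codes; SPDK's RPC server reuses these.
JSONRPC_ERRORS = {
    -32700: "Parse error",
    -32600: "Invalid request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
}

def classify_rpc_error(response_text):
    """Return (code, human-readable name) for a JSON-RPC error body."""
    body = json.loads(response_text)
    code = body["code"]
    return code, JSONRPC_ERRORS.get(code, "Server-defined error")

# The error body logged by the failed bdev_nvme_opal_revert call above:
resp = '{"code": -32602, "message": "Invalid parameters"}'
print(classify_rpc_error(resp))  # (-32602, 'Invalid params')
```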
11:51:55 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.932 11:51:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.932 11:51:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.932 11:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:18.932 ************************************ 00:05:18.932 START TEST env 00:05:18.932 ************************************ 00:05:18.932 11:51:55 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.932 * Looking for test storage... 00:05:18.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:18.932 11:51:55 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.932 11:51:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.932 11:51:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.932 11:51:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.932 ************************************ 00:05:18.932 START TEST env_memory 00:05:18.932 ************************************ 00:05:18.932 11:51:55 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.932 00:05:18.932 00:05:18.932 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.932 http://cunit.sourceforge.net/ 00:05:18.932 00:05:18.932 00:05:18.932 Suite: memory 00:05:18.932 Test: alloc and free memory map ...[2024-07-25 11:51:55.972532] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.932 passed 00:05:18.932 Test: mem map translation ...[2024-07-25 11:51:56.001606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: 
*ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.932 [2024-07-25 11:51:56.001625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.932 [2024-07-25 11:51:56.001680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.932 [2024-07-25 11:51:56.001689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.932 passed 00:05:18.932 Test: mem map registration ...[2024-07-25 11:51:56.061415] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.932 [2024-07-25 11:51:56.061432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.932 passed 00:05:18.932 Test: mem map adjacent registrations ...passed 00:05:18.932 00:05:18.932 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.932 suites 1 1 n/a 0 0 00:05:18.932 tests 4 4 4 0 0 00:05:18.932 asserts 152 152 152 0 n/a 00:05:18.932 00:05:18.932 Elapsed time = 0.205 seconds 00:05:18.932 00:05:18.932 real 0m0.217s 00:05:18.932 user 0m0.205s 00:05:18.932 sys 0m0.012s 00:05:18.932 11:51:56 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.932 11:51:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:18.932 ************************************ 00:05:18.932 END TEST env_memory 00:05:18.932 ************************************ 00:05:18.932 11:51:56 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.932 11:51:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.932 11:51:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.932 11:51:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.932 ************************************ 00:05:18.932 START TEST env_vtophys 00:05:18.932 ************************************ 00:05:18.932 11:51:56 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.932 EAL: lib.eal log level changed from notice to debug 00:05:18.932 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.932 EAL: Detected lcore 1 as core 1 on socket 0 00:05:18.932 EAL: Detected lcore 2 as core 2 on socket 0 00:05:18.932 EAL: Detected lcore 3 as core 3 on socket 0 00:05:18.932 EAL: Detected lcore 4 as core 4 on socket 0 00:05:18.932 EAL: Detected lcore 5 as core 5 on socket 0 00:05:18.932 EAL: Detected lcore 6 as core 6 on socket 0 00:05:18.932 EAL: Detected lcore 7 as core 8 on socket 0 00:05:18.932 EAL: Detected lcore 8 as core 9 on socket 0 00:05:18.932 EAL: Detected lcore 9 as core 10 on socket 0 00:05:18.932 EAL: Detected lcore 10 as core 11 on socket 0 00:05:18.932 EAL: Detected lcore 11 as core 12 on socket 0 00:05:18.932 EAL: Detected lcore 12 as core 13 on socket 0 00:05:18.932 EAL: Detected lcore 13 as core 14 on socket 0 00:05:18.932 EAL: Detected lcore 14 as core 16 on socket 0 00:05:18.932 EAL: Detected lcore 15 as core 17 on socket 0 00:05:18.932 EAL: Detected lcore 16 as core 18 on socket 0 00:05:18.932 EAL: Detected lcore 17 as core 19 on socket 0 00:05:18.932 EAL: Detected lcore 18 as core 20 on socket 0 00:05:18.932 EAL: Detected lcore 19 as core 21 on socket 0 00:05:18.932 EAL: Detected lcore 20 as core 22 on socket 0 00:05:18.932 EAL: Detected lcore 21 as core 24 on socket 0 00:05:18.932 EAL: Detected lcore 22 as core 25 on socket 0 
00:05:18.932 EAL: Detected lcore 23 as core 26 on socket 0 00:05:18.932 EAL: Detected lcore 24 as core 27 on socket 0 00:05:18.932 EAL: Detected lcore 25 as core 28 on socket 0 00:05:18.932 EAL: Detected lcore 26 as core 29 on socket 0 00:05:18.932 EAL: Detected lcore 27 as core 30 on socket 0 00:05:18.932 EAL: Detected lcore 28 as core 0 on socket 1 00:05:18.932 EAL: Detected lcore 29 as core 1 on socket 1 00:05:18.932 EAL: Detected lcore 30 as core 2 on socket 1 00:05:18.932 EAL: Detected lcore 31 as core 3 on socket 1 00:05:18.932 EAL: Detected lcore 32 as core 4 on socket 1 00:05:18.932 EAL: Detected lcore 33 as core 5 on socket 1 00:05:18.932 EAL: Detected lcore 34 as core 6 on socket 1 00:05:18.932 EAL: Detected lcore 35 as core 8 on socket 1 00:05:18.932 EAL: Detected lcore 36 as core 9 on socket 1 00:05:18.932 EAL: Detected lcore 37 as core 10 on socket 1 00:05:18.932 EAL: Detected lcore 38 as core 11 on socket 1 00:05:18.932 EAL: Detected lcore 39 as core 12 on socket 1 00:05:18.932 EAL: Detected lcore 40 as core 13 on socket 1 00:05:18.932 EAL: Detected lcore 41 as core 14 on socket 1 00:05:18.932 EAL: Detected lcore 42 as core 16 on socket 1 00:05:18.932 EAL: Detected lcore 43 as core 17 on socket 1 00:05:18.932 EAL: Detected lcore 44 as core 18 on socket 1 00:05:18.932 EAL: Detected lcore 45 as core 19 on socket 1 00:05:18.932 EAL: Detected lcore 46 as core 20 on socket 1 00:05:18.932 EAL: Detected lcore 47 as core 21 on socket 1 00:05:18.932 EAL: Detected lcore 48 as core 22 on socket 1 00:05:18.932 EAL: Detected lcore 49 as core 24 on socket 1 00:05:18.932 EAL: Detected lcore 50 as core 25 on socket 1 00:05:18.932 EAL: Detected lcore 51 as core 26 on socket 1 00:05:18.932 EAL: Detected lcore 52 as core 27 on socket 1 00:05:18.932 EAL: Detected lcore 53 as core 28 on socket 1 00:05:18.932 EAL: Detected lcore 54 as core 29 on socket 1 00:05:18.932 EAL: Detected lcore 55 as core 30 on socket 1 00:05:18.932 EAL: Detected lcore 56 as core 0 on socket 0 
00:05:18.932 EAL: Detected lcore 57 as core 1 on socket 0 00:05:18.932 EAL: Detected lcore 58 as core 2 on socket 0 00:05:18.932 EAL: Detected lcore 59 as core 3 on socket 0 00:05:18.932 EAL: Detected lcore 60 as core 4 on socket 0 00:05:18.932 EAL: Detected lcore 61 as core 5 on socket 0 00:05:18.932 EAL: Detected lcore 62 as core 6 on socket 0 00:05:18.932 EAL: Detected lcore 63 as core 8 on socket 0 00:05:18.932 EAL: Detected lcore 64 as core 9 on socket 0 00:05:18.932 EAL: Detected lcore 65 as core 10 on socket 0 00:05:18.932 EAL: Detected lcore 66 as core 11 on socket 0 00:05:18.932 EAL: Detected lcore 67 as core 12 on socket 0 00:05:18.932 EAL: Detected lcore 68 as core 13 on socket 0 00:05:18.932 EAL: Detected lcore 69 as core 14 on socket 0 00:05:18.932 EAL: Detected lcore 70 as core 16 on socket 0 00:05:18.932 EAL: Detected lcore 71 as core 17 on socket 0 00:05:18.932 EAL: Detected lcore 72 as core 18 on socket 0 00:05:18.932 EAL: Detected lcore 73 as core 19 on socket 0 00:05:18.932 EAL: Detected lcore 74 as core 20 on socket 0 00:05:18.932 EAL: Detected lcore 75 as core 21 on socket 0 00:05:18.932 EAL: Detected lcore 76 as core 22 on socket 0 00:05:18.932 EAL: Detected lcore 77 as core 24 on socket 0 00:05:18.932 EAL: Detected lcore 78 as core 25 on socket 0 00:05:18.932 EAL: Detected lcore 79 as core 26 on socket 0 00:05:18.932 EAL: Detected lcore 80 as core 27 on socket 0 00:05:18.932 EAL: Detected lcore 81 as core 28 on socket 0 00:05:18.932 EAL: Detected lcore 82 as core 29 on socket 0 00:05:18.932 EAL: Detected lcore 83 as core 30 on socket 0 00:05:18.932 EAL: Detected lcore 84 as core 0 on socket 1 00:05:18.932 EAL: Detected lcore 85 as core 1 on socket 1 00:05:18.932 EAL: Detected lcore 86 as core 2 on socket 1 00:05:18.932 EAL: Detected lcore 87 as core 3 on socket 1 00:05:18.932 EAL: Detected lcore 88 as core 4 on socket 1 00:05:18.932 EAL: Detected lcore 89 as core 5 on socket 1 00:05:18.932 EAL: Detected lcore 90 as core 6 on socket 1 
00:05:18.932 EAL: Detected lcore 91 as core 8 on socket 1 00:05:18.932 EAL: Detected lcore 92 as core 9 on socket 1 00:05:18.932 EAL: Detected lcore 93 as core 10 on socket 1 00:05:18.932 EAL: Detected lcore 94 as core 11 on socket 1 00:05:18.932 EAL: Detected lcore 95 as core 12 on socket 1 00:05:18.932 EAL: Detected lcore 96 as core 13 on socket 1 00:05:18.932 EAL: Detected lcore 97 as core 14 on socket 1 00:05:18.932 EAL: Detected lcore 98 as core 16 on socket 1 00:05:18.932 EAL: Detected lcore 99 as core 17 on socket 1 00:05:18.932 EAL: Detected lcore 100 as core 18 on socket 1 00:05:18.932 EAL: Detected lcore 101 as core 19 on socket 1 00:05:18.933 EAL: Detected lcore 102 as core 20 on socket 1 00:05:18.933 EAL: Detected lcore 103 as core 21 on socket 1 00:05:18.933 EAL: Detected lcore 104 as core 22 on socket 1 00:05:18.933 EAL: Detected lcore 105 as core 24 on socket 1 00:05:18.933 EAL: Detected lcore 106 as core 25 on socket 1 00:05:18.933 EAL: Detected lcore 107 as core 26 on socket 1 00:05:18.933 EAL: Detected lcore 108 as core 27 on socket 1 00:05:18.933 EAL: Detected lcore 109 as core 28 on socket 1 00:05:18.933 EAL: Detected lcore 110 as core 29 on socket 1 00:05:18.933 EAL: Detected lcore 111 as core 30 on socket 1 00:05:18.933 EAL: Maximum logical cores by configuration: 128 00:05:18.933 EAL: Detected CPU lcores: 112 00:05:18.933 EAL: Detected NUMA nodes: 2 00:05:18.933 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:18.933 EAL: Detected shared linkage of DPDK 00:05:19.193 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.193 EAL: Bus pci wants IOVA as 'DC' 00:05:19.193 EAL: Buses did not request a specific IOVA mode. 00:05:19.193 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.193 EAL: Selected IOVA mode 'VA' 00:05:19.193 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.193 EAL: Probing VFIO support... 
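The lcore dump above (112 lcores across 2 NUMA sockets, with hyperthread siblings such as lcore 0 and lcore 56 both mapping to core 0 on socket 0) can be condensed into a per-socket topology with a small parser. This is an illustrative log-analysis sketch, not part of the test harness or DPDK:

```python
import re
from collections import defaultdict

# Matches EAL detection lines like:
#   EAL: Detected lcore 56 as core 0 on socket 0
LCORE_RE = re.compile(r"EAL: Detected lcore (\d+) as core (\d+) on socket (\d+)")

def socket_topology(log_lines):
    """Map socket id -> set of physical core ids seen in EAL detection output."""
    topo = defaultdict(set)
    for line in log_lines:
        m = LCORE_RE.search(line)
        if m:
            lcore, core, socket = map(int, m.groups())
            topo[socket].add(core)
    return topo

sample = [
    "EAL: Detected lcore 0 as core 0 on socket 0",
    "EAL: Detected lcore 56 as core 0 on socket 0",  # hyperthread sibling
    "EAL: Detected lcore 28 as core 0 on socket 1",
]
print(dict(socket_topology(sample)))  # {0: {0}, 1: {0}}
```

Sibling lcores collapse into one physical core per socket, which is how the dump above yields 2 sockets of 28 cores each from 112 lcores.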
00:05:19.193 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.193 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.193 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.193 EAL: VFIO support initialized 00:05:19.193 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.193 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.193 EAL: Setting up physically contiguous memory... 00:05:19.193 EAL: Setting maximum number of open files to 524288 00:05:19.193 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.193 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.193 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.193 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.193 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.193 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.193 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.193 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.193 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.193 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:19.193 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.193 EAL: Hugepages will be freed exactly as allocated. 00:05:19.193 EAL: No shared files mode enabled, IPC is disabled 00:05:19.193 EAL: No shared files mode enabled, IPC is disabled 00:05:19.193 EAL: TSC frequency is ~2200000 KHz 00:05:19.193 EAL: Main lcore 0 is ready (tid=7fbc706dba00;cpuset=[0]) 00:05:19.193 EAL: Trying to obtain current memory policy. 00:05:19.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.193 EAL: Restoring previous memory policy: 0 00:05:19.193 EAL: request: mp_malloc_sync 00:05:19.193 EAL: No shared files mode enabled, IPC is disabled 00:05:19.193 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.193 EAL: No shared files mode enabled, IPC is disabled 00:05:19.193 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.193 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.193 00:05:19.193 00:05:19.193 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.193 http://cunit.sourceforge.net/ 00:05:19.193 00:05:19.193 00:05:19.193 Suite: components_suite 00:05:19.193 Test: vtophys_malloc_test ...passed 00:05:19.193 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.193 EAL: Restoring previous memory policy: 4 00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.193 EAL: request: mp_malloc_sync 00:05:19.193 EAL: No shared files mode enabled, IPC is disabled 00:05:19.193 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.193 EAL: request: mp_malloc_sync 00:05:19.193 EAL: No shared files mode enabled, IPC is disabled 00:05:19.193 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.193 EAL: Trying to obtain current memory policy. 
00:05:19.193 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.193 EAL: Restoring previous memory policy: 4
00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.193 EAL: request: mp_malloc_sync
00:05:19.193 EAL: No shared files mode enabled, IPC is disabled
00:05:19.193 EAL: Heap on socket 0 was expanded by 6MB
00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.193 EAL: request: mp_malloc_sync
00:05:19.193 EAL: No shared files mode enabled, IPC is disabled
00:05:19.193 EAL: Heap on socket 0 was shrunk by 6MB
00:05:19.193 EAL: Trying to obtain current memory policy.
00:05:19.193 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.193 EAL: Restoring previous memory policy: 4
00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.193 EAL: request: mp_malloc_sync
00:05:19.193 EAL: No shared files mode enabled, IPC is disabled
00:05:19.193 EAL: Heap on socket 0 was expanded by 10MB
00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.193 EAL: request: mp_malloc_sync
00:05:19.193 EAL: No shared files mode enabled, IPC is disabled
00:05:19.193 EAL: Heap on socket 0 was shrunk by 10MB
00:05:19.193 EAL: Trying to obtain current memory policy.
00:05:19.193 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.193 EAL: Restoring previous memory policy: 4
00:05:19.193 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was expanded by 18MB
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was shrunk by 18MB
00:05:19.194 EAL: Trying to obtain current memory policy.
00:05:19.194 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.194 EAL: Restoring previous memory policy: 4
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was expanded by 34MB
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was shrunk by 34MB
00:05:19.194 EAL: Trying to obtain current memory policy.
00:05:19.194 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.194 EAL: Restoring previous memory policy: 4
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was expanded by 66MB
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was shrunk by 66MB
00:05:19.194 EAL: Trying to obtain current memory policy.
00:05:19.194 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.194 EAL: Restoring previous memory policy: 4
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was expanded by 130MB
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was shrunk by 130MB
00:05:19.194 EAL: Trying to obtain current memory policy.
00:05:19.194 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.194 EAL: Restoring previous memory policy: 4
00:05:19.194 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.194 EAL: request: mp_malloc_sync
00:05:19.194 EAL: No shared files mode enabled, IPC is disabled
00:05:19.194 EAL: Heap on socket 0 was expanded by 258MB
00:05:19.453 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.453 EAL: request: mp_malloc_sync
00:05:19.453 EAL: No shared files mode enabled, IPC is disabled
00:05:19.453 EAL: Heap on socket 0 was shrunk by 258MB
00:05:19.453 EAL: Trying to obtain current memory policy.
00:05:19.453 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.453 EAL: Restoring previous memory policy: 4
00:05:19.453 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.453 EAL: request: mp_malloc_sync
00:05:19.453 EAL: No shared files mode enabled, IPC is disabled
00:05:19.453 EAL: Heap on socket 0 was expanded by 514MB
00:05:19.453 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.712 EAL: request: mp_malloc_sync
00:05:19.712 EAL: No shared files mode enabled, IPC is disabled
00:05:19.712 EAL: Heap on socket 0 was shrunk by 514MB
00:05:19.712 EAL: Trying to obtain current memory policy.
00:05:19.712 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:19.971 EAL: Restoring previous memory policy: 4
00:05:19.971 EAL: Calling mem event callback 'spdk:(nil)'
00:05:19.971 EAL: request: mp_malloc_sync
00:05:19.971 EAL: No shared files mode enabled, IPC is disabled
00:05:19.971 EAL: Heap on socket 0 was expanded by 1026MB
00:05:19.971 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.231 EAL: request: mp_malloc_sync
00:05:20.231 EAL: No shared files mode enabled, IPC is disabled
00:05:20.231 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:20.231 passed
00:05:20.231
00:05:20.231 Run Summary: Type Total Ran Passed Failed Inactive
00:05:20.231 suites 1 1 n/a 0 0
00:05:20.231 tests 2 2 2 0 0
00:05:20.231 asserts 497 497 497 0 n/a
00:05:20.231
00:05:20.231 Elapsed time = 1.015 seconds
00:05:20.231 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.231 EAL: request: mp_malloc_sync
00:05:20.231 EAL: No shared files mode enabled, IPC is disabled
00:05:20.231 EAL: Heap on socket 0 was shrunk by 2MB
00:05:20.231 EAL: No shared files mode enabled, IPC is disabled
00:05:20.231 EAL: No shared files mode enabled, IPC is disabled
00:05:20.231 EAL: No shared files mode enabled, IPC is disabled
00:05:20.231
00:05:20.231 real 0m1.158s
00:05:20.231 user 0m0.675s
00:05:20.231 sys 0m0.449s
00:05:20.231 11:51:57 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:20.231 11:51:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:20.231 ************************************
00:05:20.231 END TEST env_vtophys
00:05:20.231 ************************************
00:05:20.231 11:51:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:20.231 11:51:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:20.231 11:51:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.231 11:51:57 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.231 ************************************
00:05:20.231 START TEST env_pci
************************************
00:05:20.231 11:51:57 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:20.231
00:05:20.231
00:05:20.231 CUnit - A unit testing framework for C - Version 2.1-3
00:05:20.231 http://cunit.sourceforge.net/
00:05:20.231
00:05:20.231
00:05:20.231 Suite: pci
00:05:20.231 Test: pci_hook ...[2024-07-25 11:51:57.451030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3930546 has claimed it
00:05:20.231 EAL: Cannot find device (10000:00:01.0)
00:05:20.231 EAL: Failed to attach device on primary process
00:05:20.231 passed
00:05:20.231
00:05:20.231 Run Summary: Type Total Ran Passed Failed Inactive
00:05:20.231 suites 1 1 n/a 0 0
00:05:20.231 tests 1 1 1 0 0
00:05:20.231 asserts 25 25 25 0 n/a
00:05:20.231
00:05:20.231 Elapsed time = 0.029 seconds
00:05:20.231
00:05:20.231 real 0m0.049s
00:05:20.231 user 0m0.017s
00:05:20.231 sys 0m0.032s
00:05:20.231 11:51:57 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:20.231 11:51:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:20.231 ************************************
00:05:20.231 END TEST env_pci
00:05:20.231 ************************************
00:05:20.231 11:51:57 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:20.231 11:51:57 env -- env/env.sh@15 -- # uname
00:05:20.231 11:51:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:20.231 11:51:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:20.231 11:51:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
11:51:57 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:20.231 11:51:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.231 11:51:57 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.491 ************************************
00:05:20.491 START TEST env_dpdk_post_init
************************************
00:05:20.491 11:51:57 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:20.491 EAL: Detected CPU lcores: 112
00:05:20.491 EAL: Detected NUMA nodes: 2
00:05:20.491 EAL: Detected shared linkage of DPDK
00:05:20.491 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:20.491 EAL: Selected IOVA mode 'VA'
00:05:20.491 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.491 EAL: VFIO support initialized
00:05:20.491 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:20.491 EAL: Using IOMMU type 1 (Type 1)
00:05:20.491 EAL: Ignore mapping IO port bar(1)
00:05:20.491 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:20.491 EAL: Ignore mapping IO port bar(1)
00:05:20.491 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:20.491 EAL: Ignore mapping IO port bar(1)
00:05:20.491 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:20.491 EAL: Ignore mapping IO port bar(1)
00:05:20.491 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:20.491 EAL: Ignore mapping IO port bar(1)
00:05:20.491 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:20.491 EAL: Ignore mapping IO port bar(1)
00:05:20.491 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:20.751 EAL: Ignore mapping IO port bar(1)
00:05:20.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:21.691 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1)
00:05:24.983 EAL: Releasing PCI mapped resource for 0000:86:00.0
00:05:24.983 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000
00:05:24.983 Starting DPDK initialization...
00:05:24.983 Starting SPDK post initialization...
00:05:24.983 SPDK NVMe probe
00:05:24.983 Attaching to 0000:86:00.0
00:05:24.983 Attached to 0000:86:00.0
00:05:24.983 Cleaning up...
00:05:24.983
00:05:24.983 real 0m4.457s
00:05:24.983 user 0m3.352s
00:05:24.983 sys 0m0.156s
00:05:24.983 11:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:24.983 11:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:24.983 ************************************
00:05:24.983 END TEST env_dpdk_post_init
00:05:24.983 ************************************
00:05:24.983 11:52:02 env -- env/env.sh@26 -- # uname
00:05:24.983 11:52:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:24.983 11:52:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:24.983 11:52:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:24.983 11:52:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:24.983 11:52:02 env -- common/autotest_common.sh@10 -- # set +x
00:05:24.983 ************************************
00:05:24.983 START TEST env_mem_callbacks
************************************
00:05:24.983 11:52:02 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:24.983 EAL: Detected CPU lcores: 112
00:05:24.983 EAL: Detected NUMA nodes: 2
00:05:24.983 EAL: Detected shared linkage of DPDK
00:05:24.983 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:24.983 EAL: Selected IOVA mode 'VA'
00:05:24.983 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.983 EAL: VFIO support initialized
00:05:24.983 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:24.983
00:05:24.983
00:05:24.983 CUnit - A unit testing framework for C - Version 2.1-3
00:05:24.983 http://cunit.sourceforge.net/
00:05:24.983
00:05:24.983
00:05:24.983 Suite: memory
00:05:24.983 Test: test ...
00:05:24.983 register 0x200000200000 2097152
00:05:24.983 malloc 3145728
00:05:24.983 register 0x200000400000 4194304
00:05:24.983 buf 0x200000500000 len 3145728 PASSED
00:05:24.983 malloc 64
00:05:24.983 buf 0x2000004fff40 len 64 PASSED
00:05:24.983 malloc 4194304
00:05:24.983 register 0x200000800000 6291456
00:05:24.983 buf 0x200000a00000 len 4194304 PASSED
00:05:24.983 free 0x200000500000 3145728
00:05:24.983 free 0x2000004fff40 64
00:05:24.983 unregister 0x200000400000 4194304 PASSED
00:05:24.983 free 0x200000a00000 4194304
00:05:24.983 unregister 0x200000800000 6291456 PASSED
00:05:24.983 malloc 8388608
00:05:24.983 register 0x200000400000 10485760
00:05:24.983 buf 0x200000600000 len 8388608 PASSED
00:05:24.983 free 0x200000600000 8388608
00:05:24.983 unregister 0x200000400000 10485760 PASSED
00:05:24.983 passed
00:05:24.983
00:05:24.983 Run Summary: Type Total Ran Passed Failed Inactive
00:05:24.983 suites 1 1 n/a 0 0
00:05:24.983 tests 1 1 1 0 0
00:05:24.983 asserts 15 15 15 0 n/a
00:05:24.983
00:05:24.983 Elapsed time = 0.007 seconds
00:05:24.983
00:05:24.983 real 0m0.060s
00:05:24.983 user 0m0.020s
00:05:24.983 sys 0m0.040s
00:05:24.983 11:52:02 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:24.983 11:52:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:24.983 ************************************
00:05:24.983 END TEST env_mem_callbacks
00:05:24.983 ************************************
00:05:24.983
00:05:24.983 real 0m6.377s
00:05:24.983 user 0m4.456s
00:05:24.983 sys 0m0.966s
00:05:24.983 11:52:02 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:24.983 11:52:02 env -- common/autotest_common.sh@10 -- # set +x
00:05:24.983 ************************************
00:05:24.983 END TEST env
00:05:24.983 ************************************
00:05:24.983 11:52:02 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:24.983 11:52:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:24.983 11:52:02 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:24.983 11:52:02 -- common/autotest_common.sh@10 -- # set +x
00:05:24.983 ************************************
00:05:24.983 START TEST rpc
************************************
00:05:24.983 11:52:02 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:25.242 * Looking for test storage...
00:05:25.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:25.242 11:52:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3931468
00:05:25.242 11:52:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:25.242 11:52:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:25.242 11:52:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3931468
00:05:25.242 11:52:02 rpc -- common/autotest_common.sh@831 -- # '[' -z 3931468 ']'
00:05:25.242 11:52:02 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:25.242 11:52:02 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:25.242 11:52:02 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:25.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:25.242 11:52:02 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:25.242 11:52:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:25.242 [2024-07-25 11:52:02.394815] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:05:25.242 [2024-07-25 11:52:02.394870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931468 ]
00:05:25.242 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.242 [2024-07-25 11:52:02.469673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.502 [2024-07-25 11:52:02.559274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:25.502 [2024-07-25 11:52:02.559318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3931468' to capture a snapshot of events at runtime.
00:05:25.502 [2024-07-25 11:52:02.559328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:25.502 [2024-07-25 11:52:02.559338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:25.502 [2024-07-25 11:52:02.559345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3931468 for offline analysis/debug.
00:05:25.502 [2024-07-25 11:52:02.559370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.502 11:52:02 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:25.502 11:52:02 rpc -- common/autotest_common.sh@864 -- # return 0
00:05:25.502 11:52:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:25.502 11:52:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:25.502 11:52:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:25.502 11:52:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:25.502 11:52:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:25.502 11:52:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:25.502 11:52:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:25.761 ************************************
00:05:25.761 START TEST rpc_integrity
************************************
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:25.761 {
00:05:25.761 "name": "Malloc0",
00:05:25.761 "aliases": [
00:05:25.761 "97c393cd-a240-4d5a-9231-81335b995406"
00:05:25.761 ],
00:05:25.761 "product_name": "Malloc disk",
00:05:25.761 "block_size": 512,
00:05:25.761 "num_blocks": 16384,
00:05:25.761 "uuid": "97c393cd-a240-4d5a-9231-81335b995406",
00:05:25.761 "assigned_rate_limits": {
00:05:25.761 "rw_ios_per_sec": 0,
00:05:25.761 "rw_mbytes_per_sec": 0,
00:05:25.761 "r_mbytes_per_sec": 0,
00:05:25.761 "w_mbytes_per_sec": 0
00:05:25.761 },
00:05:25.761 "claimed": false,
00:05:25.761 "zoned": false,
00:05:25.761 "supported_io_types": {
00:05:25.761 "read": true,
00:05:25.761 "write": true,
00:05:25.761 "unmap": true,
00:05:25.761 "flush": true,
00:05:25.761 "reset": true,
00:05:25.761 "nvme_admin": false,
00:05:25.761 "nvme_io": false,
00:05:25.761 "nvme_io_md": false,
00:05:25.761 "write_zeroes": true,
00:05:25.761 "zcopy": true,
00:05:25.761 "get_zone_info": false,
00:05:25.761 "zone_management": false,
00:05:25.761 "zone_append": false,
00:05:25.761 "compare": false,
00:05:25.761 "compare_and_write": false,
00:05:25.761 "abort": true,
00:05:25.761 "seek_hole": false,
00:05:25.761 "seek_data": false,
00:05:25.761 "copy": true,
00:05:25.761 "nvme_iov_md": false
00:05:25.761 },
00:05:25.761 "memory_domains": [
00:05:25.761 {
00:05:25.761 "dma_device_id": "system",
00:05:25.761 "dma_device_type": 1
00:05:25.761 },
00:05:25.761 {
00:05:25.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:25.761 "dma_device_type": 2
00:05:25.761 }
00:05:25.761 ],
00:05:25.761 "driver_specific": {}
00:05:25.761 }
00:05:25.761 ]'
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.761 [2024-07-25 11:52:02.960958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:25.761 [2024-07-25 11:52:02.960996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:25.761 [2024-07-25 11:52:02.961012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbadc80
00:05:25.761 [2024-07-25 11:52:02.961021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:25.761 [2024-07-25 11:52:02.962579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:25.761 [2024-07-25 11:52:02.962613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 Passthru0
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.761 11:52:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.761 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:25.761 {
00:05:25.761 "name": "Malloc0",
00:05:25.761 "aliases": [
00:05:25.761 "97c393cd-a240-4d5a-9231-81335b995406"
00:05:25.761 ],
00:05:25.761 "product_name": "Malloc disk",
00:05:25.761 "block_size": 512,
00:05:25.761 "num_blocks": 16384,
00:05:25.761 "uuid": "97c393cd-a240-4d5a-9231-81335b995406",
00:05:25.761 "assigned_rate_limits": {
00:05:25.761 "rw_ios_per_sec": 0,
00:05:25.761 "rw_mbytes_per_sec": 0,
00:05:25.761 "r_mbytes_per_sec": 0,
00:05:25.761 "w_mbytes_per_sec": 0
00:05:25.761 },
00:05:25.761 "claimed": true,
00:05:25.761 "claim_type": "exclusive_write",
00:05:25.761 "zoned": false,
00:05:25.761 "supported_io_types": {
00:05:25.761 "read": true,
00:05:25.761 "write": true,
00:05:25.761 "unmap": true,
00:05:25.761 "flush": true,
00:05:25.762 "reset": true,
00:05:25.762 "nvme_admin": false,
00:05:25.762 "nvme_io": false,
00:05:25.762 "nvme_io_md": false,
00:05:25.762 "write_zeroes": true,
00:05:25.762 "zcopy": true,
00:05:25.762 "get_zone_info": false,
00:05:25.762 "zone_management": false,
00:05:25.762 "zone_append": false,
00:05:25.762 "compare": false,
00:05:25.762 "compare_and_write": false,
00:05:25.762 "abort": true,
00:05:25.762 "seek_hole": false,
00:05:25.762 "seek_data": false,
00:05:25.762 "copy": true,
00:05:25.762 "nvme_iov_md": false
00:05:25.762 },
00:05:25.762 "memory_domains": [
00:05:25.762 {
00:05:25.762 "dma_device_id": "system",
00:05:25.762 "dma_device_type": 1
00:05:25.762 },
00:05:25.762 {
00:05:25.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:25.762 "dma_device_type": 2
00:05:25.762 }
00:05:25.762 ],
00:05:25.762 "driver_specific": {}
00:05:25.762 },
00:05:25.762 {
00:05:25.762 "name": "Passthru0",
00:05:25.762 "aliases": [
00:05:25.762 "202a685a-4c09-5ccc-b069-215d6a8366cd"
00:05:25.762 ],
00:05:25.762 "product_name": "passthru",
00:05:25.762 "block_size": 512,
00:05:25.762 "num_blocks": 16384,
00:05:25.762 "uuid": "202a685a-4c09-5ccc-b069-215d6a8366cd",
00:05:25.762 "assigned_rate_limits": {
00:05:25.762 "rw_ios_per_sec": 0,
00:05:25.762 "rw_mbytes_per_sec": 0,
00:05:25.762 "r_mbytes_per_sec": 0,
00:05:25.762 "w_mbytes_per_sec": 0
00:05:25.762 },
00:05:25.762 "claimed": false,
00:05:25.762 "zoned": false,
00:05:25.762 "supported_io_types": {
00:05:25.762 "read": true,
00:05:25.762 "write": true,
00:05:25.762 "unmap": true,
00:05:25.762 "flush": true,
00:05:25.762 "reset": true,
00:05:25.762 "nvme_admin": false,
00:05:25.762 "nvme_io": false,
00:05:25.762 "nvme_io_md": false,
00:05:25.762 "write_zeroes": true,
00:05:25.762 "zcopy": true,
00:05:25.762 "get_zone_info": false,
00:05:25.762 "zone_management": false,
00:05:25.762 "zone_append": false,
00:05:25.762 "compare": false,
00:05:25.762 "compare_and_write": false,
00:05:25.762 "abort": true,
00:05:25.762 "seek_hole": false,
00:05:25.762 "seek_data": false,
00:05:25.762 "copy": true,
00:05:25.762 "nvme_iov_md": false
00:05:25.762 },
00:05:25.762 "memory_domains": [
00:05:25.762 {
00:05:25.762 "dma_device_id": "system",
00:05:25.762 "dma_device_type": 1
00:05:25.762 },
00:05:25.762 {
00:05:25.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:25.762 "dma_device_type": 2
00:05:25.762 }
00:05:25.762 ],
00:05:25.762 "driver_specific": {
00:05:25.762 "passthru": {
00:05:25.762 "name": "Passthru0",
00:05:25.762 "base_bdev_name": "Malloc0"
00:05:25.762 }
00:05:25.762 }
00:05:25.762 }
00:05:25.762 ]'
00:05:25.762 11:52:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:25.762 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:25.762 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.762 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:25.762 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:25.762 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:26.021 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:26.021 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:26.021 11:52:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:26.021
00:05:26.021 real 0m0.297s
00:05:26.021 user 0m0.188s
00:05:26.021 sys 0m0.039s
00:05:26.021 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:26.021 11:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 ************************************
00:05:26.021 END TEST rpc_integrity
00:05:26.021 ************************************
00:05:26.021 11:52:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:26.021 11:52:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:26.021 11:52:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:26.021 11:52:03 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 ************************************
00:05:26.021 START TEST rpc_plugins
00:05:26.021 ************************************
11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:26.021 {
00:05:26.021 "name": "Malloc1",
00:05:26.021 "aliases": [
00:05:26.021 "f26f6fb0-c827-482d-9082-a723e6da189a"
00:05:26.021 ],
00:05:26.021 "product_name": "Malloc disk",
00:05:26.021 "block_size": 4096,
00:05:26.021 "num_blocks": 256,
00:05:26.021 "uuid": "f26f6fb0-c827-482d-9082-a723e6da189a",
00:05:26.021 "assigned_rate_limits": {
00:05:26.021 "rw_ios_per_sec": 0,
00:05:26.021 "rw_mbytes_per_sec": 0,
00:05:26.021 "r_mbytes_per_sec": 0,
00:05:26.021 "w_mbytes_per_sec": 0
00:05:26.021 },
00:05:26.021 "claimed": false,
00:05:26.021 "zoned": false,
00:05:26.021 "supported_io_types": {
00:05:26.021 "read": true,
00:05:26.021 "write": true,
00:05:26.021 "unmap": true,
00:05:26.021 "flush": true,
00:05:26.021 "reset": true,
00:05:26.021 "nvme_admin": false,
00:05:26.021 "nvme_io": false,
00:05:26.021 "nvme_io_md": false,
00:05:26.021 "write_zeroes": true,
00:05:26.021 "zcopy": true,
00:05:26.021 "get_zone_info": false,
00:05:26.021 "zone_management": false,
00:05:26.021 "zone_append": false,
00:05:26.021 "compare": false,
00:05:26.021 "compare_and_write": false,
00:05:26.021 "abort": true,
00:05:26.021 "seek_hole": false,
00:05:26.021 "seek_data": false,
00:05:26.021 "copy": true,
00:05:26.021 "nvme_iov_md": false
00:05:26.021 },
00:05:26.021 "memory_domains": [
00:05:26.021 {
00:05:26.021 "dma_device_id": "system",
00:05:26.021 "dma_device_type": 1
00:05:26.021 },
00:05:26.021 {
00:05:26.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:26.021 "dma_device_type": 2
00:05:26.021 }
00:05:26.021 ],
00:05:26.021 "driver_specific": {}
00:05:26.021 }
00:05:26.021 ]'
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:26.021 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:26.021 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:26.287 11:52:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:26.287
00:05:26.287 real 0m0.149s
00:05:26.287 user 0m0.099s
00:05:26.287 sys 0m0.012s
00:05:26.287 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:26.287 11:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:26.287 ************************************
00:05:26.287 END TEST rpc_plugins 00:05:26.288 ************************************ 00:05:26.288 11:52:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.288 11:52:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.288 11:52:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.288 11:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.288 ************************************ 00:05:26.288 START TEST rpc_trace_cmd_test 00:05:26.288 ************************************ 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.288 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.288 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3931468", 00:05:26.288 "tpoint_group_mask": "0x8", 00:05:26.288 "iscsi_conn": { 00:05:26.288 "mask": "0x2", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "scsi": { 00:05:26.288 "mask": "0x4", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "bdev": { 00:05:26.288 "mask": "0x8", 00:05:26.288 "tpoint_mask": "0xffffffffffffffff" 00:05:26.288 }, 00:05:26.288 "nvmf_rdma": { 00:05:26.288 "mask": "0x10", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "nvmf_tcp": { 00:05:26.288 "mask": "0x20", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "ftl": { 00:05:26.288 "mask": "0x40", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "blobfs": { 00:05:26.288 "mask": "0x80", 00:05:26.288 
"tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "dsa": { 00:05:26.288 "mask": "0x200", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "thread": { 00:05:26.288 "mask": "0x400", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "nvme_pcie": { 00:05:26.288 "mask": "0x800", 00:05:26.288 "tpoint_mask": "0x0" 00:05:26.288 }, 00:05:26.288 "iaa": { 00:05:26.289 "mask": "0x1000", 00:05:26.289 "tpoint_mask": "0x0" 00:05:26.289 }, 00:05:26.289 "nvme_tcp": { 00:05:26.289 "mask": "0x2000", 00:05:26.289 "tpoint_mask": "0x0" 00:05:26.289 }, 00:05:26.289 "bdev_nvme": { 00:05:26.289 "mask": "0x4000", 00:05:26.289 "tpoint_mask": "0x0" 00:05:26.289 }, 00:05:26.289 "sock": { 00:05:26.289 "mask": "0x8000", 00:05:26.289 "tpoint_mask": "0x0" 00:05:26.289 } 00:05:26.289 }' 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.289 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.550 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.550 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.550 11:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.550 00:05:26.550 real 0m0.238s 00:05:26.550 user 0m0.207s 00:05:26.550 sys 0m0.024s 00:05:26.550 11:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.550 11:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.550 
************************************ 00:05:26.550 END TEST rpc_trace_cmd_test 00:05:26.550 ************************************ 00:05:26.550 11:52:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.550 11:52:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.550 11:52:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.550 11:52:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.550 11:52:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.550 11:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.550 ************************************ 00:05:26.550 START TEST rpc_daemon_integrity 00:05:26.550 ************************************ 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.550 { 00:05:26.550 "name": "Malloc2", 00:05:26.550 "aliases": [ 00:05:26.550 "45b2cfa3-e1cd-473a-a9e8-65a8091454a7" 00:05:26.550 ], 00:05:26.550 "product_name": "Malloc disk", 00:05:26.550 "block_size": 512, 00:05:26.550 "num_blocks": 16384, 00:05:26.550 "uuid": "45b2cfa3-e1cd-473a-a9e8-65a8091454a7", 00:05:26.550 "assigned_rate_limits": { 00:05:26.550 "rw_ios_per_sec": 0, 00:05:26.550 "rw_mbytes_per_sec": 0, 00:05:26.550 "r_mbytes_per_sec": 0, 00:05:26.550 "w_mbytes_per_sec": 0 00:05:26.550 }, 00:05:26.550 "claimed": false, 00:05:26.550 "zoned": false, 00:05:26.550 "supported_io_types": { 00:05:26.550 "read": true, 00:05:26.550 "write": true, 00:05:26.550 "unmap": true, 00:05:26.550 "flush": true, 00:05:26.550 "reset": true, 00:05:26.550 "nvme_admin": false, 00:05:26.550 "nvme_io": false, 00:05:26.550 "nvme_io_md": false, 00:05:26.550 "write_zeroes": true, 00:05:26.550 "zcopy": true, 00:05:26.550 "get_zone_info": false, 00:05:26.550 "zone_management": false, 00:05:26.550 "zone_append": false, 00:05:26.550 "compare": false, 00:05:26.550 "compare_and_write": false, 00:05:26.550 "abort": true, 00:05:26.550 "seek_hole": false, 00:05:26.550 "seek_data": false, 00:05:26.550 "copy": true, 00:05:26.550 "nvme_iov_md": false 00:05:26.550 }, 00:05:26.550 "memory_domains": [ 00:05:26.550 { 00:05:26.550 "dma_device_id": "system", 00:05:26.550 "dma_device_type": 1 00:05:26.550 }, 00:05:26.550 { 00:05:26.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.550 "dma_device_type": 2 00:05:26.550 } 00:05:26.550 ], 00:05:26.550 "driver_specific": {} 00:05:26.550 } 00:05:26.550 ]' 00:05:26.550 
11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.550 [2024-07-25 11:52:03.843502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.550 [2024-07-25 11:52:03.843536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.550 [2024-07-25 11:52:03.843554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbaf1c0 00:05:26.550 [2024-07-25 11:52:03.843563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.550 [2024-07-25 11:52:03.844934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.550 [2024-07-25 11:52:03.844959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.550 Passthru0 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.550 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.812 { 00:05:26.812 "name": "Malloc2", 00:05:26.812 "aliases": [ 00:05:26.812 "45b2cfa3-e1cd-473a-a9e8-65a8091454a7" 00:05:26.812 ], 00:05:26.812 "product_name": "Malloc disk", 00:05:26.812 "block_size": 512, 00:05:26.812 
"num_blocks": 16384, 00:05:26.812 "uuid": "45b2cfa3-e1cd-473a-a9e8-65a8091454a7", 00:05:26.812 "assigned_rate_limits": { 00:05:26.812 "rw_ios_per_sec": 0, 00:05:26.812 "rw_mbytes_per_sec": 0, 00:05:26.812 "r_mbytes_per_sec": 0, 00:05:26.812 "w_mbytes_per_sec": 0 00:05:26.812 }, 00:05:26.812 "claimed": true, 00:05:26.812 "claim_type": "exclusive_write", 00:05:26.812 "zoned": false, 00:05:26.812 "supported_io_types": { 00:05:26.812 "read": true, 00:05:26.812 "write": true, 00:05:26.812 "unmap": true, 00:05:26.812 "flush": true, 00:05:26.812 "reset": true, 00:05:26.812 "nvme_admin": false, 00:05:26.812 "nvme_io": false, 00:05:26.812 "nvme_io_md": false, 00:05:26.812 "write_zeroes": true, 00:05:26.812 "zcopy": true, 00:05:26.812 "get_zone_info": false, 00:05:26.812 "zone_management": false, 00:05:26.812 "zone_append": false, 00:05:26.812 "compare": false, 00:05:26.812 "compare_and_write": false, 00:05:26.812 "abort": true, 00:05:26.812 "seek_hole": false, 00:05:26.812 "seek_data": false, 00:05:26.812 "copy": true, 00:05:26.812 "nvme_iov_md": false 00:05:26.812 }, 00:05:26.812 "memory_domains": [ 00:05:26.812 { 00:05:26.812 "dma_device_id": "system", 00:05:26.812 "dma_device_type": 1 00:05:26.812 }, 00:05:26.812 { 00:05:26.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.812 "dma_device_type": 2 00:05:26.812 } 00:05:26.812 ], 00:05:26.812 "driver_specific": {} 00:05:26.812 }, 00:05:26.812 { 00:05:26.812 "name": "Passthru0", 00:05:26.812 "aliases": [ 00:05:26.812 "238bf24f-7f77-5df8-bcf9-12cdab305856" 00:05:26.812 ], 00:05:26.812 "product_name": "passthru", 00:05:26.812 "block_size": 512, 00:05:26.812 "num_blocks": 16384, 00:05:26.812 "uuid": "238bf24f-7f77-5df8-bcf9-12cdab305856", 00:05:26.812 "assigned_rate_limits": { 00:05:26.812 "rw_ios_per_sec": 0, 00:05:26.812 "rw_mbytes_per_sec": 0, 00:05:26.812 "r_mbytes_per_sec": 0, 00:05:26.812 "w_mbytes_per_sec": 0 00:05:26.812 }, 00:05:26.812 "claimed": false, 00:05:26.812 "zoned": false, 00:05:26.812 
"supported_io_types": { 00:05:26.812 "read": true, 00:05:26.812 "write": true, 00:05:26.812 "unmap": true, 00:05:26.812 "flush": true, 00:05:26.812 "reset": true, 00:05:26.812 "nvme_admin": false, 00:05:26.812 "nvme_io": false, 00:05:26.812 "nvme_io_md": false, 00:05:26.812 "write_zeroes": true, 00:05:26.812 "zcopy": true, 00:05:26.812 "get_zone_info": false, 00:05:26.812 "zone_management": false, 00:05:26.812 "zone_append": false, 00:05:26.812 "compare": false, 00:05:26.812 "compare_and_write": false, 00:05:26.812 "abort": true, 00:05:26.812 "seek_hole": false, 00:05:26.812 "seek_data": false, 00:05:26.812 "copy": true, 00:05:26.812 "nvme_iov_md": false 00:05:26.812 }, 00:05:26.812 "memory_domains": [ 00:05:26.812 { 00:05:26.812 "dma_device_id": "system", 00:05:26.812 "dma_device_type": 1 00:05:26.812 }, 00:05:26.812 { 00:05:26.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.812 "dma_device_type": 2 00:05:26.812 } 00:05:26.812 ], 00:05:26.812 "driver_specific": { 00:05:26.812 "passthru": { 00:05:26.812 "name": "Passthru0", 00:05:26.812 "base_bdev_name": "Malloc2" 00:05:26.812 } 00:05:26.812 } 00:05:26.812 } 00:05:26.812 ]' 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.812 00:05:26.812 real 0m0.295s 00:05:26.812 user 0m0.195s 00:05:26.812 sys 0m0.034s 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.812 11:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.812 ************************************ 00:05:26.812 END TEST rpc_daemon_integrity 00:05:26.812 ************************************ 00:05:26.812 11:52:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.812 11:52:04 rpc -- rpc/rpc.sh@84 -- # killprocess 3931468 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@950 -- # '[' -z 3931468 ']' 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@954 -- # kill -0 3931468 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@955 -- # uname 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3931468 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3931468' 
00:05:26.812 killing process with pid 3931468 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@969 -- # kill 3931468 00:05:26.812 11:52:04 rpc -- common/autotest_common.sh@974 -- # wait 3931468 00:05:27.382 00:05:27.382 real 0m2.166s 00:05:27.382 user 0m2.854s 00:05:27.382 sys 0m0.680s 00:05:27.382 11:52:04 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.382 11:52:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.382 ************************************ 00:05:27.382 END TEST rpc 00:05:27.382 ************************************ 00:05:27.382 11:52:04 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.382 11:52:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.382 11:52:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.382 11:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.382 ************************************ 00:05:27.382 START TEST skip_rpc 00:05:27.382 ************************************ 00:05:27.382 11:52:04 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.382 * Looking for test storage... 
00:05:27.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.382 11:52:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.382 11:52:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.382 11:52:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.382 11:52:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.382 11:52:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.382 11:52:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.382 ************************************ 00:05:27.382 START TEST skip_rpc 00:05:27.382 ************************************ 00:05:27.382 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:27.382 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3932157 00:05:27.382 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.382 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.382 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:27.642 [2024-07-25 11:52:04.698341] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:27.642 [2024-07-25 11:52:04.698459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932157 ] 00:05:27.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.642 [2024-07-25 11:52:04.813189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.642 [2024-07-25 11:52:04.900151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:32.914 11:52:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3932157 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3932157 ']' 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3932157 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3932157 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3932157' 00:05:32.914 killing process with pid 3932157 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3932157 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3932157 00:05:32.914 00:05:32.914 real 0m5.397s 00:05:32.914 user 0m5.110s 00:05:32.914 sys 0m0.316s 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.914 11:52:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.914 ************************************ 00:05:32.914 END TEST skip_rpc 00:05:32.914 ************************************ 00:05:32.914 11:52:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:32.914 11:52:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.914 11:52:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.914 11:52:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.914 
************************************ 00:05:32.914 START TEST skip_rpc_with_json 00:05:32.914 ************************************ 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3933229 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3933229 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3933229 ']' 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.914 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.914 [2024-07-25 11:52:10.130054] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:32.914 [2024-07-25 11:52:10.130111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933229 ] 00:05:32.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.173 [2024-07-25 11:52:10.211123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.173 [2024-07-25 11:52:10.297765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.109 [2024-07-25 11:52:11.070138] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:34.109 request: 00:05:34.109 { 00:05:34.109 "trtype": "tcp", 00:05:34.109 "method": "nvmf_get_transports", 00:05:34.109 "req_id": 1 00:05:34.109 } 00:05:34.109 Got JSON-RPC error response 00:05:34.109 response: 00:05:34.109 { 00:05:34.109 "code": -19, 00:05:34.109 "message": "No such device" 00:05:34.109 } 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.109 [2024-07-25 11:52:11.082279] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.109 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.109 { 00:05:34.109 "subsystems": [ 00:05:34.109 { 00:05:34.109 "subsystem": "vfio_user_target", 00:05:34.109 "config": null 00:05:34.109 }, 00:05:34.109 { 00:05:34.109 "subsystem": "keyring", 00:05:34.109 "config": [] 00:05:34.109 }, 00:05:34.109 { 00:05:34.109 "subsystem": "iobuf", 00:05:34.109 "config": [ 00:05:34.109 { 00:05:34.109 "method": "iobuf_set_options", 00:05:34.110 "params": { 00:05:34.110 "small_pool_count": 8192, 00:05:34.110 "large_pool_count": 1024, 00:05:34.110 "small_bufsize": 8192, 00:05:34.110 "large_bufsize": 135168 00:05:34.110 } 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "sock", 00:05:34.110 "config": [ 00:05:34.110 { 00:05:34.110 "method": "sock_set_default_impl", 00:05:34.110 "params": { 00:05:34.110 "impl_name": "posix" 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "sock_impl_set_options", 00:05:34.110 "params": { 00:05:34.110 "impl_name": "ssl", 00:05:34.110 "recv_buf_size": 4096, 00:05:34.110 "send_buf_size": 4096, 00:05:34.110 "enable_recv_pipe": true, 00:05:34.110 "enable_quickack": false, 00:05:34.110 "enable_placement_id": 0, 00:05:34.110 "enable_zerocopy_send_server": true, 00:05:34.110 "enable_zerocopy_send_client": false, 00:05:34.110 "zerocopy_threshold": 0, 
00:05:34.110 "tls_version": 0, 00:05:34.110 "enable_ktls": false 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "sock_impl_set_options", 00:05:34.110 "params": { 00:05:34.110 "impl_name": "posix", 00:05:34.110 "recv_buf_size": 2097152, 00:05:34.110 "send_buf_size": 2097152, 00:05:34.110 "enable_recv_pipe": true, 00:05:34.110 "enable_quickack": false, 00:05:34.110 "enable_placement_id": 0, 00:05:34.110 "enable_zerocopy_send_server": true, 00:05:34.110 "enable_zerocopy_send_client": false, 00:05:34.110 "zerocopy_threshold": 0, 00:05:34.110 "tls_version": 0, 00:05:34.110 "enable_ktls": false 00:05:34.110 } 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "vmd", 00:05:34.110 "config": [] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "accel", 00:05:34.110 "config": [ 00:05:34.110 { 00:05:34.110 "method": "accel_set_options", 00:05:34.110 "params": { 00:05:34.110 "small_cache_size": 128, 00:05:34.110 "large_cache_size": 16, 00:05:34.110 "task_count": 2048, 00:05:34.110 "sequence_count": 2048, 00:05:34.110 "buf_count": 2048 00:05:34.110 } 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "bdev", 00:05:34.110 "config": [ 00:05:34.110 { 00:05:34.110 "method": "bdev_set_options", 00:05:34.110 "params": { 00:05:34.110 "bdev_io_pool_size": 65535, 00:05:34.110 "bdev_io_cache_size": 256, 00:05:34.110 "bdev_auto_examine": true, 00:05:34.110 "iobuf_small_cache_size": 128, 00:05:34.110 "iobuf_large_cache_size": 16 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "bdev_raid_set_options", 00:05:34.110 "params": { 00:05:34.110 "process_window_size_kb": 1024, 00:05:34.110 "process_max_bandwidth_mb_sec": 0 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "bdev_iscsi_set_options", 00:05:34.110 "params": { 00:05:34.110 "timeout_sec": 30 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "bdev_nvme_set_options", 00:05:34.110 
"params": { 00:05:34.110 "action_on_timeout": "none", 00:05:34.110 "timeout_us": 0, 00:05:34.110 "timeout_admin_us": 0, 00:05:34.110 "keep_alive_timeout_ms": 10000, 00:05:34.110 "arbitration_burst": 0, 00:05:34.110 "low_priority_weight": 0, 00:05:34.110 "medium_priority_weight": 0, 00:05:34.110 "high_priority_weight": 0, 00:05:34.110 "nvme_adminq_poll_period_us": 10000, 00:05:34.110 "nvme_ioq_poll_period_us": 0, 00:05:34.110 "io_queue_requests": 0, 00:05:34.110 "delay_cmd_submit": true, 00:05:34.110 "transport_retry_count": 4, 00:05:34.110 "bdev_retry_count": 3, 00:05:34.110 "transport_ack_timeout": 0, 00:05:34.110 "ctrlr_loss_timeout_sec": 0, 00:05:34.110 "reconnect_delay_sec": 0, 00:05:34.110 "fast_io_fail_timeout_sec": 0, 00:05:34.110 "disable_auto_failback": false, 00:05:34.110 "generate_uuids": false, 00:05:34.110 "transport_tos": 0, 00:05:34.110 "nvme_error_stat": false, 00:05:34.110 "rdma_srq_size": 0, 00:05:34.110 "io_path_stat": false, 00:05:34.110 "allow_accel_sequence": false, 00:05:34.110 "rdma_max_cq_size": 0, 00:05:34.110 "rdma_cm_event_timeout_ms": 0, 00:05:34.110 "dhchap_digests": [ 00:05:34.110 "sha256", 00:05:34.110 "sha384", 00:05:34.110 "sha512" 00:05:34.110 ], 00:05:34.110 "dhchap_dhgroups": [ 00:05:34.110 "null", 00:05:34.110 "ffdhe2048", 00:05:34.110 "ffdhe3072", 00:05:34.110 "ffdhe4096", 00:05:34.110 "ffdhe6144", 00:05:34.110 "ffdhe8192" 00:05:34.110 ] 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "bdev_nvme_set_hotplug", 00:05:34.110 "params": { 00:05:34.110 "period_us": 100000, 00:05:34.110 "enable": false 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "bdev_wait_for_examine" 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "scsi", 00:05:34.110 "config": null 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "scheduler", 00:05:34.110 "config": [ 00:05:34.110 { 00:05:34.110 "method": "framework_set_scheduler", 00:05:34.110 "params": { 00:05:34.110 
"name": "static" 00:05:34.110 } 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "vhost_scsi", 00:05:34.110 "config": [] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "vhost_blk", 00:05:34.110 "config": [] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "ublk", 00:05:34.110 "config": [] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "nbd", 00:05:34.110 "config": [] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "nvmf", 00:05:34.110 "config": [ 00:05:34.110 { 00:05:34.110 "method": "nvmf_set_config", 00:05:34.110 "params": { 00:05:34.110 "discovery_filter": "match_any", 00:05:34.110 "admin_cmd_passthru": { 00:05:34.110 "identify_ctrlr": false 00:05:34.110 } 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "nvmf_set_max_subsystems", 00:05:34.110 "params": { 00:05:34.110 "max_subsystems": 1024 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "nvmf_set_crdt", 00:05:34.110 "params": { 00:05:34.110 "crdt1": 0, 00:05:34.110 "crdt2": 0, 00:05:34.110 "crdt3": 0 00:05:34.110 } 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "method": "nvmf_create_transport", 00:05:34.110 "params": { 00:05:34.110 "trtype": "TCP", 00:05:34.110 "max_queue_depth": 128, 00:05:34.110 "max_io_qpairs_per_ctrlr": 127, 00:05:34.110 "in_capsule_data_size": 4096, 00:05:34.110 "max_io_size": 131072, 00:05:34.110 "io_unit_size": 131072, 00:05:34.110 "max_aq_depth": 128, 00:05:34.110 "num_shared_buffers": 511, 00:05:34.110 "buf_cache_size": 4294967295, 00:05:34.110 "dif_insert_or_strip": false, 00:05:34.110 "zcopy": false, 00:05:34.110 "c2h_success": true, 00:05:34.110 "sock_priority": 0, 00:05:34.110 "abort_timeout_sec": 1, 00:05:34.110 "ack_timeout": 0, 00:05:34.110 "data_wr_pool_size": 0 00:05:34.110 } 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 }, 00:05:34.110 { 00:05:34.110 "subsystem": "iscsi", 00:05:34.110 "config": [ 00:05:34.110 { 00:05:34.110 "method": "iscsi_set_options", 00:05:34.110 
"params": { 00:05:34.110 "node_base": "iqn.2016-06.io.spdk", 00:05:34.110 "max_sessions": 128, 00:05:34.110 "max_connections_per_session": 2, 00:05:34.110 "max_queue_depth": 64, 00:05:34.110 "default_time2wait": 2, 00:05:34.110 "default_time2retain": 20, 00:05:34.110 "first_burst_length": 8192, 00:05:34.110 "immediate_data": true, 00:05:34.110 "allow_duplicated_isid": false, 00:05:34.110 "error_recovery_level": 0, 00:05:34.110 "nop_timeout": 60, 00:05:34.110 "nop_in_interval": 30, 00:05:34.110 "disable_chap": false, 00:05:34.110 "require_chap": false, 00:05:34.110 "mutual_chap": false, 00:05:34.110 "chap_group": 0, 00:05:34.110 "max_large_datain_per_connection": 64, 00:05:34.110 "max_r2t_per_connection": 4, 00:05:34.110 "pdu_pool_size": 36864, 00:05:34.110 "immediate_data_pool_size": 16384, 00:05:34.110 "data_out_pool_size": 2048 00:05:34.110 } 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 } 00:05:34.110 ] 00:05:34.110 } 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3933229 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3933229 ']' 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3933229 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.110 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3933229 00:05:34.111 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.111 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.111 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 3933229' 00:05:34.111 killing process with pid 3933229 00:05:34.111 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3933229 00:05:34.111 11:52:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3933229 00:05:34.369 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3933504 00:05:34.369 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:34.369 11:52:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3933504 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3933504 ']' 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3933504 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3933504 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3933504' 00:05:39.639 killing process with pid 3933504 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3933504 00:05:39.639 11:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3933504 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.897 00:05:39.897 real 0m6.962s 00:05:39.897 user 0m6.849s 00:05:39.897 sys 0m0.686s 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.897 ************************************ 00:05:39.897 END TEST skip_rpc_with_json 00:05:39.897 ************************************ 00:05:39.897 11:52:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.897 11:52:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.897 11:52:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.897 11:52:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.897 ************************************ 00:05:39.897 START TEST skip_rpc_with_delay 00:05:39.897 ************************************ 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.897 
11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.897 [2024-07-25 11:52:17.162711] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.897 [2024-07-25 11:52:17.162796] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.897 00:05:39.897 real 0m0.076s 00:05:39.897 user 0m0.052s 00:05:39.897 sys 0m0.024s 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.897 11:52:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.897 ************************************ 00:05:39.897 END TEST skip_rpc_with_delay 00:05:39.897 ************************************ 00:05:40.156 11:52:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:40.156 11:52:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:40.156 11:52:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:40.156 11:52:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.156 11:52:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.156 11:52:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.156 ************************************ 00:05:40.156 START TEST exit_on_failed_rpc_init 00:05:40.156 ************************************ 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3934606 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3934606 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3934606 ']' 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.156 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.156 [2024-07-25 11:52:17.314184] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:40.156 [2024-07-25 11:52:17.314251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934606 ] 00:05:40.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.156 [2024-07-25 11:52:17.404794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.415 [2024-07-25 11:52:17.497029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.673 11:52:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.673 11:52:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.673 [2024-07-25 11:52:17.824209] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:40.673 [2024-07-25 11:52:17.824269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934618 ] 00:05:40.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.673 [2024-07-25 11:52:17.906945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.932 [2024-07-25 11:52:18.009594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.932 [2024-07-25 11:52:18.009684] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:40.932 [2024-07-25 11:52:18.009700] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.932 [2024-07-25 11:52:18.009710] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3934606 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3934606 ']' 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3934606 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3934606 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3934606' 
00:05:40.932 killing process with pid 3934606 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3934606 00:05:40.932 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3934606 00:05:41.540 00:05:41.540 real 0m1.247s 00:05:41.540 user 0m1.676s 00:05:41.540 sys 0m0.450s 00:05:41.540 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.540 11:52:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 ************************************ 00:05:41.540 END TEST exit_on_failed_rpc_init 00:05:41.540 ************************************ 00:05:41.540 11:52:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.540 00:05:41.540 real 0m14.055s 00:05:41.540 user 0m13.834s 00:05:41.540 sys 0m1.730s 00:05:41.540 11:52:18 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.540 11:52:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 ************************************ 00:05:41.540 END TEST skip_rpc 00:05:41.540 ************************************ 00:05:41.540 11:52:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.540 11:52:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.540 11:52:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.540 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 ************************************ 00:05:41.540 START TEST rpc_client 00:05:41.540 ************************************ 00:05:41.540 11:52:18 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.540 * Looking for test storage... 
00:05:41.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:41.540 11:52:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.540 OK 00:05:41.540 11:52:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.540 00:05:41.540 real 0m0.112s 00:05:41.540 user 0m0.049s 00:05:41.540 sys 0m0.071s 00:05:41.540 11:52:18 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.540 11:52:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 ************************************ 00:05:41.540 END TEST rpc_client 00:05:41.540 ************************************ 00:05:41.540 11:52:18 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.540 11:52:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.540 11:52:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.540 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 ************************************ 00:05:41.540 START TEST json_config 00:05:41.540 ************************************ 00:05:41.540 11:52:18 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.798 11:52:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.798 11:52:18 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.798 11:52:18 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.798 11:52:18 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.798 11:52:18 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.798 11:52:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:41.798 11:52:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.798 11:52:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.798 11:52:18 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.798 11:52:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@47 -- # : 0 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.798 11:52:18 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.799 11:52:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.799 11:52:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.799 11:52:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.799 11:52:18 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.799 11:52:18 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.799 11:52:18 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:41.799 11:52:18 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:41.799 INFO: JSON configuration test init 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.799 11:52:18 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.799 11:52:18 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.799 11:52:18 json_config -- json_config/common.sh@10 -- # shift 00:05:41.799 11:52:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.799 11:52:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.799 11:52:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.799 11:52:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.799 11:52:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.799 11:52:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3934987 00:05:41.799 11:52:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.799 Waiting for target to run... 
00:05:41.799 11:52:18 json_config -- json_config/common.sh@25 -- # waitforlisten 3934987 /var/tmp/spdk_tgt.sock 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@831 -- # '[' -z 3934987 ']' 00:05:41.799 11:52:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.799 11:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.799 [2024-07-25 11:52:18.958199] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:41.799 [2024-07-25 11:52:18.958260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934987 ] 00:05:41.799 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.057 [2024-07-25 11:52:19.263508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.057 [2024-07-25 11:52:19.343260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.992 11:52:19 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.992 11:52:19 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:42.992 11:52:19 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.992 00:05:42.992 11:52:19 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:42.992 11:52:19 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:42.992 11:52:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.992 11:52:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.992 11:52:19 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:42.992 11:52:19 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:42.992 11:52:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.992 11:52:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.992 11:52:19 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.992 11:52:19 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:42.992 11:52:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:46.281 
11:52:23 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:46.281 11:52:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.281 11:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:46.281 11:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@51 -- # sort 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:46.281 11:52:23 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.281 11:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:46.281 11:52:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.281 11:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:46.281 11:52:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.281 11:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.540 MallocForNvmf0 00:05:46.540 11:52:23 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.540 11:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.799 MallocForNvmf1 00:05:46.799 11:52:23 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.799 11:52:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:47.058 [2024-07-25 11:52:24.198340] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.058 11:52:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.058 11:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.317 11:52:24 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.317 11:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.575 11:52:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.575 11:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.834 11:52:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.834 11:52:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:48.093 [2024-07-25 11:52:25.193554] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:48.093 11:52:25 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:48.093 11:52:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.093 11:52:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.093 11:52:25 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:48.093 11:52:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.093 11:52:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.093 11:52:25 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:48.093 11:52:25 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:48.093 11:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:48.352 MallocBdevForConfigChangeCheck 00:05:48.352 11:52:25 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:48.352 11:52:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.352 11:52:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.352 11:52:25 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:48.352 11:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.920 11:52:25 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:48.920 INFO: shutting down applications... 
00:05:48.920 11:52:25 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:48.920 11:52:25 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:48.920 11:52:25 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:48.920 11:52:25 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.294 Calling clear_iscsi_subsystem 00:05:50.294 Calling clear_nvmf_subsystem 00:05:50.294 Calling clear_nbd_subsystem 00:05:50.294 Calling clear_ublk_subsystem 00:05:50.294 Calling clear_vhost_blk_subsystem 00:05:50.294 Calling clear_vhost_scsi_subsystem 00:05:50.294 Calling clear_bdev_subsystem 00:05:50.294 11:52:27 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.294 11:52:27 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:50.294 11:52:27 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:50.294 11:52:27 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.294 11:52:27 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.294 11:52:27 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.860 11:52:27 json_config -- json_config/json_config.sh@349 -- # break 00:05:50.860 11:52:27 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:50.860 11:52:27 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:50.860 11:52:27 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:50.860 11:52:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.860 11:52:27 json_config -- json_config/common.sh@35 -- # [[ -n 3934987 ]] 00:05:50.860 11:52:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3934987 00:05:50.860 11:52:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.860 11:52:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.860 11:52:27 json_config -- json_config/common.sh@41 -- # kill -0 3934987 00:05:50.860 11:52:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.429 11:52:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.429 11:52:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.429 11:52:28 json_config -- json_config/common.sh@41 -- # kill -0 3934987 00:05:51.429 11:52:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.429 11:52:28 json_config -- json_config/common.sh@43 -- # break 00:05:51.429 11:52:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.429 11:52:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.429 SPDK target shutdown done 00:05:51.429 11:52:28 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:51.429 INFO: relaunching applications... 
00:05:51.429 11:52:28 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.429 11:52:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:51.429 11:52:28 json_config -- json_config/common.sh@10 -- # shift 00:05:51.429 11:52:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.429 11:52:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.429 11:52:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.429 11:52:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.429 11:52:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.429 11:52:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3936806 00:05:51.429 11:52:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.429 Waiting for target to run... 00:05:51.429 11:52:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.429 11:52:28 json_config -- json_config/common.sh@25 -- # waitforlisten 3936806 /var/tmp/spdk_tgt.sock 00:05:51.429 11:52:28 json_config -- common/autotest_common.sh@831 -- # '[' -z 3936806 ']' 00:05:51.429 11:52:28 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.429 11:52:28 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.429 11:52:28 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:51.429 11:52:28 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.430 11:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.430 [2024-07-25 11:52:28.556404] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:51.430 [2024-07-25 11:52:28.556479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936806 ] 00:05:51.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.998 [2024-07-25 11:52:29.013253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.998 [2024-07-25 11:52:29.112819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.295 [2024-07-25 11:52:32.159941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.295 [2024-07-25 11:52:32.192287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.295 11:52:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.295 11:52:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:55.295 11:52:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:55.295 00:05:55.295 11:52:32 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:55.295 11:52:32 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:55.295 INFO: Checking if target configuration is the same... 
00:05:55.295 11:52:32 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.295 11:52:32 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:55.295 11:52:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.295 + '[' 2 -ne 2 ']' 00:05:55.295 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.295 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:55.295 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.295 +++ basename /dev/fd/62 00:05:55.295 ++ mktemp /tmp/62.XXX 00:05:55.295 + tmp_file_1=/tmp/62.XV9 00:05:55.295 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.295 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.295 + tmp_file_2=/tmp/spdk_tgt_config.json.6BH 00:05:55.295 + ret=0 00:05:55.295 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.554 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.554 + diff -u /tmp/62.XV9 /tmp/spdk_tgt_config.json.6BH 00:05:55.554 + echo 'INFO: JSON config files are the same' 00:05:55.554 INFO: JSON config files are the same 00:05:55.554 + rm /tmp/62.XV9 /tmp/spdk_tgt_config.json.6BH 00:05:55.554 + exit 0 00:05:55.554 11:52:32 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:55.554 11:52:32 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:55.554 INFO: changing configuration and checking if this can be detected... 
00:05:55.554 11:52:32 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.554 11:52:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.813 11:52:32 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.813 11:52:32 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:55.813 11:52:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.813 + '[' 2 -ne 2 ']' 00:05:55.813 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.813 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:55.813 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.813 +++ basename /dev/fd/62 00:05:55.813 ++ mktemp /tmp/62.XXX 00:05:55.813 + tmp_file_1=/tmp/62.RVx 00:05:55.813 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.813 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.813 + tmp_file_2=/tmp/spdk_tgt_config.json.xxL 00:05:55.813 + ret=0 00:05:55.813 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.071 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.071 + diff -u /tmp/62.RVx /tmp/spdk_tgt_config.json.xxL 00:05:56.331 + ret=1 00:05:56.331 + echo '=== Start of file: /tmp/62.RVx ===' 00:05:56.331 + cat /tmp/62.RVx 00:05:56.331 + echo '=== End of file: /tmp/62.RVx ===' 00:05:56.331 + echo '' 00:05:56.331 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xxL ===' 00:05:56.331 + cat /tmp/spdk_tgt_config.json.xxL 00:05:56.331 + echo '=== End of file: /tmp/spdk_tgt_config.json.xxL ===' 00:05:56.331 + echo '' 00:05:56.331 + rm /tmp/62.RVx /tmp/spdk_tgt_config.json.xxL 00:05:56.331 + exit 1 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:56.331 INFO: configuration change detected. 
00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@321 -- # [[ -n 3936806 ]] 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.331 11:52:33 json_config -- json_config/json_config.sh@327 -- # killprocess 3936806 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@950 -- # '[' -z 3936806 ']' 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@954 -- # kill -0 
3936806 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@955 -- # uname 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3936806 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3936806' 00:05:56.331 killing process with pid 3936806 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@969 -- # kill 3936806 00:05:56.331 11:52:33 json_config -- common/autotest_common.sh@974 -- # wait 3936806 00:05:58.238 11:52:35 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.238 11:52:35 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:58.238 11:52:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.238 11:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.238 11:52:35 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:58.238 11:52:35 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:58.238 INFO: Success 00:05:58.238 00:05:58.238 real 0m16.307s 00:05:58.238 user 0m18.391s 00:05:58.238 sys 0m2.115s 00:05:58.238 11:52:35 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.238 11:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.238 ************************************ 00:05:58.238 END TEST json_config 00:05:58.238 ************************************ 00:05:58.238 11:52:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:58.238 11:52:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.238 11:52:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.238 11:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:58.238 ************************************ 00:05:58.238 START TEST json_config_extra_key 00:05:58.238 ************************************ 00:05:58.238 11:52:35 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:58.238 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:58.238 11:52:35 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.238 11:52:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.238 11:52:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.238 11:52:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.238 11:52:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 11:52:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 11:52:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 11:52:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:58.238 11:52:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.238 11:52:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:58.239 11:52:35 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:58.239 INFO: launching applications... 
00:05:58.239 11:52:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3938121 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.239 Waiting for target to run... 
00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3938121 /var/tmp/spdk_tgt.sock 00:05:58.239 11:52:35 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3938121 ']' 00:05:58.239 11:52:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.239 11:52:35 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.239 11:52:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:58.239 11:52:35 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.239 11:52:35 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.239 11:52:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.239 [2024-07-25 11:52:35.297317] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
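The `waitforlisten 3938121 /var/tmp/spdk_tgt.sock` call above blocks until the freshly launched spdk_tgt is accepting RPCs on its UNIX domain socket. A minimal standalone sketch of that polling pattern (the retry budget and sleep interval are illustrative; the real helper goes further and probes the socket with an actual RPC rather than only checking that the path exists):

```shell
#!/usr/bin/env bash
# Poll until a path exists and is a UNIX domain socket, or give up.
# Sketch of a waitforlisten-style helper; retry budget is illustrative.
wait_for_unix_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$sock" ]; then
            return 0        # socket file is present; server is likely listening
        fi
        sleep 0.1
    done
    return 1                # gave up after max_retries checks
}
```

Note that `[ -S path ]` only confirms a socket file exists; a server that crashed after binding would still pass, which is why the harness pairs this wait with an ERR trap and a PID check.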
00:05:58.239 [2024-07-25 11:52:35.297381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938121 ] 00:05:58.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.499 [2024-07-25 11:52:35.747089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.757 [2024-07-25 11:52:35.846294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.016 11:52:36 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.016 11:52:36 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:59.016 00:05:59.016 11:52:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:59.016 INFO: shutting down applications... 
00:05:59.016 11:52:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3938121 ]] 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3938121 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3938121 00:05:59.016 11:52:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.583 11:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.583 11:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.583 11:52:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3938121 00:05:59.583 11:52:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.583 11:52:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.584 11:52:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.584 11:52:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.584 SPDK target shutdown done 00:05:59.584 11:52:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.584 Success 00:05:59.584 00:05:59.584 real 0m1.573s 00:05:59.584 user 0m1.343s 00:05:59.584 sys 0m0.560s 00:05:59.584 11:52:36 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.584 11:52:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.584 
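The shutdown sequence above sends SIGINT to the target and then polls up to 30 times with `kill -0`, sleeping between checks, before declaring it down. A self-contained sketch of that pattern (the signal, retry count, and interval are illustrative parameters; the real harness additionally clears `app_pid["$app"]` and breaks out of its loop on success):

```shell
#!/usr/bin/env bash
# Signal a process, then poll with `kill -0` until it exits or a retry
# budget is exhausted. Signal name and budget are illustrative defaults.
shutdown_app() {
    local pid=$1 sig=${2:-INT} max_checks=${3:-30} i
    kill -s "$sig" "$pid" 2>/dev/null || return 0   # already gone
    for ((i = 0; i < max_checks; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 0                                # process has exited
        fi
        sleep 0.5
    done
    return 1            # still alive after the budget; caller may escalate
}
```

`kill -0` sends no signal at all; it only asks the kernel whether the PID is still signalable, which is why it works as a liveness probe here.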
************************************ 00:05:59.584 END TEST json_config_extra_key 00:05:59.584 ************************************ 00:05:59.584 11:52:36 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.584 11:52:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.584 11:52:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.584 11:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.584 ************************************ 00:05:59.584 START TEST alias_rpc 00:05:59.584 ************************************ 00:05:59.584 11:52:36 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.843 * Looking for test storage... 00:05:59.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:59.843 11:52:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.843 11:52:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3938444 00:05:59.843 11:52:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3938444 00:05:59.843 11:52:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.843 11:52:36 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3938444 ']' 00:05:59.843 11:52:36 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.843 11:52:36 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.843 11:52:36 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
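Each sub-test in this log is launched through a `run_test` wrapper, which prints the starred `START TEST` / `END TEST` banners seen above around the test command's own output and propagates its exit status. A simplified sketch of such a wrapper (the real SPDK `run_test` in autotest_common.sh also records timing and manages xtrace state):

```shell
#!/usr/bin/env bash
# Simplified run_test-style wrapper: print a start banner, run the given
# command, print an end banner, and propagate the command's exit status.
run_test() {
    local name=$1 rc
    shift
    echo "************ START TEST $name ************"
    "$@"
    rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
```

Keeping the banners on stdout means they interleave with the test's own output in exactly the order seen in this log, which is what makes the log greppable per sub-test.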
00:05:59.843 11:52:36 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.843 11:52:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.843 [2024-07-25 11:52:36.957420] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:59.843 [2024-07-25 11:52:36.957490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938444 ] 00:05:59.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.843 [2024-07-25 11:52:37.039071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.843 [2024-07-25 11:52:37.130216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.780 11:52:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.780 11:52:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:00.780 11:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:01.040 11:52:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3938444 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3938444 ']' 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3938444 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3938444 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3938444' 
00:06:01.040 killing process with pid 3938444 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@969 -- # kill 3938444 00:06:01.040 11:52:38 alias_rpc -- common/autotest_common.sh@974 -- # wait 3938444 00:06:01.300 00:06:01.300 real 0m1.765s 00:06:01.300 user 0m2.063s 00:06:01.300 sys 0m0.488s 00:06:01.300 11:52:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.300 11:52:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.300 ************************************ 00:06:01.300 END TEST alias_rpc 00:06:01.300 ************************************ 00:06:01.558 11:52:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:01.558 11:52:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:01.558 11:52:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.558 11:52:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.558 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:01.558 ************************************ 00:06:01.558 START TEST spdkcli_tcp 00:06:01.559 ************************************ 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:01.559 * Looking for test storage... 
00:06:01.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3938972 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3938972 00:06:01.559 11:52:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3938972 ']' 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.559 11:52:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.559 [2024-07-25 11:52:38.805679] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:01.559 [2024-07-25 11:52:38.805744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938972 ] 00:06:01.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.818 [2024-07-25 11:52:38.886664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.818 [2024-07-25 11:52:38.979026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.818 [2024-07-25 11:52:38.979032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.755 11:52:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.755 11:52:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:02.755 11:52:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3939029 00:06:02.755 11:52:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:02.755 11:52:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.755 [ 00:06:02.755 "bdev_malloc_delete", 00:06:02.755 "bdev_malloc_create", 00:06:02.755 "bdev_null_resize", 00:06:02.755 "bdev_null_delete", 00:06:02.755 "bdev_null_create", 00:06:02.755 "bdev_nvme_cuse_unregister", 00:06:02.755 "bdev_nvme_cuse_register", 00:06:02.755 "bdev_opal_new_user", 00:06:02.755 "bdev_opal_set_lock_state", 00:06:02.755 "bdev_opal_delete", 00:06:02.755 "bdev_opal_get_info", 00:06:02.755 "bdev_opal_create", 00:06:02.755 "bdev_nvme_opal_revert", 00:06:02.755 
"bdev_nvme_opal_init", 00:06:02.755 "bdev_nvme_send_cmd", 00:06:02.755 "bdev_nvme_get_path_iostat", 00:06:02.755 "bdev_nvme_get_mdns_discovery_info", 00:06:02.755 "bdev_nvme_stop_mdns_discovery", 00:06:02.755 "bdev_nvme_start_mdns_discovery", 00:06:02.755 "bdev_nvme_set_multipath_policy", 00:06:02.755 "bdev_nvme_set_preferred_path", 00:06:02.755 "bdev_nvme_get_io_paths", 00:06:02.755 "bdev_nvme_remove_error_injection", 00:06:02.755 "bdev_nvme_add_error_injection", 00:06:02.755 "bdev_nvme_get_discovery_info", 00:06:02.755 "bdev_nvme_stop_discovery", 00:06:02.755 "bdev_nvme_start_discovery", 00:06:02.755 "bdev_nvme_get_controller_health_info", 00:06:02.755 "bdev_nvme_disable_controller", 00:06:02.755 "bdev_nvme_enable_controller", 00:06:02.755 "bdev_nvme_reset_controller", 00:06:02.756 "bdev_nvme_get_transport_statistics", 00:06:02.756 "bdev_nvme_apply_firmware", 00:06:02.756 "bdev_nvme_detach_controller", 00:06:02.756 "bdev_nvme_get_controllers", 00:06:02.756 "bdev_nvme_attach_controller", 00:06:02.756 "bdev_nvme_set_hotplug", 00:06:02.756 "bdev_nvme_set_options", 00:06:02.756 "bdev_passthru_delete", 00:06:02.756 "bdev_passthru_create", 00:06:02.756 "bdev_lvol_set_parent_bdev", 00:06:02.756 "bdev_lvol_set_parent", 00:06:02.756 "bdev_lvol_check_shallow_copy", 00:06:02.756 "bdev_lvol_start_shallow_copy", 00:06:02.756 "bdev_lvol_grow_lvstore", 00:06:02.756 "bdev_lvol_get_lvols", 00:06:02.756 "bdev_lvol_get_lvstores", 00:06:02.756 "bdev_lvol_delete", 00:06:02.756 "bdev_lvol_set_read_only", 00:06:02.756 "bdev_lvol_resize", 00:06:02.756 "bdev_lvol_decouple_parent", 00:06:02.756 "bdev_lvol_inflate", 00:06:02.756 "bdev_lvol_rename", 00:06:02.756 "bdev_lvol_clone_bdev", 00:06:02.756 "bdev_lvol_clone", 00:06:02.756 "bdev_lvol_snapshot", 00:06:02.756 "bdev_lvol_create", 00:06:02.756 "bdev_lvol_delete_lvstore", 00:06:02.756 "bdev_lvol_rename_lvstore", 00:06:02.756 "bdev_lvol_create_lvstore", 00:06:02.756 "bdev_raid_set_options", 00:06:02.756 "bdev_raid_remove_base_bdev", 
00:06:02.756 "bdev_raid_add_base_bdev", 00:06:02.756 "bdev_raid_delete", 00:06:02.756 "bdev_raid_create", 00:06:02.756 "bdev_raid_get_bdevs", 00:06:02.756 "bdev_error_inject_error", 00:06:02.756 "bdev_error_delete", 00:06:02.756 "bdev_error_create", 00:06:02.756 "bdev_split_delete", 00:06:02.756 "bdev_split_create", 00:06:02.756 "bdev_delay_delete", 00:06:02.756 "bdev_delay_create", 00:06:02.756 "bdev_delay_update_latency", 00:06:02.756 "bdev_zone_block_delete", 00:06:02.756 "bdev_zone_block_create", 00:06:02.756 "blobfs_create", 00:06:02.756 "blobfs_detect", 00:06:02.756 "blobfs_set_cache_size", 00:06:02.756 "bdev_aio_delete", 00:06:02.756 "bdev_aio_rescan", 00:06:02.756 "bdev_aio_create", 00:06:02.756 "bdev_ftl_set_property", 00:06:02.756 "bdev_ftl_get_properties", 00:06:02.756 "bdev_ftl_get_stats", 00:06:02.756 "bdev_ftl_unmap", 00:06:02.756 "bdev_ftl_unload", 00:06:02.756 "bdev_ftl_delete", 00:06:02.756 "bdev_ftl_load", 00:06:02.756 "bdev_ftl_create", 00:06:02.756 "bdev_virtio_attach_controller", 00:06:02.756 "bdev_virtio_scsi_get_devices", 00:06:02.756 "bdev_virtio_detach_controller", 00:06:02.756 "bdev_virtio_blk_set_hotplug", 00:06:02.756 "bdev_iscsi_delete", 00:06:02.756 "bdev_iscsi_create", 00:06:02.756 "bdev_iscsi_set_options", 00:06:02.756 "accel_error_inject_error", 00:06:02.756 "ioat_scan_accel_module", 00:06:02.756 "dsa_scan_accel_module", 00:06:02.756 "iaa_scan_accel_module", 00:06:02.756 "vfu_virtio_create_scsi_endpoint", 00:06:02.756 "vfu_virtio_scsi_remove_target", 00:06:02.756 "vfu_virtio_scsi_add_target", 00:06:02.756 "vfu_virtio_create_blk_endpoint", 00:06:02.756 "vfu_virtio_delete_endpoint", 00:06:02.756 "keyring_file_remove_key", 00:06:02.756 "keyring_file_add_key", 00:06:02.756 "keyring_linux_set_options", 00:06:02.756 "iscsi_get_histogram", 00:06:02.756 "iscsi_enable_histogram", 00:06:02.756 "iscsi_set_options", 00:06:02.756 "iscsi_get_auth_groups", 00:06:02.756 "iscsi_auth_group_remove_secret", 00:06:02.756 "iscsi_auth_group_add_secret", 
00:06:02.756 "iscsi_delete_auth_group", 00:06:02.756 "iscsi_create_auth_group", 00:06:02.756 "iscsi_set_discovery_auth", 00:06:02.756 "iscsi_get_options", 00:06:02.756 "iscsi_target_node_request_logout", 00:06:02.756 "iscsi_target_node_set_redirect", 00:06:02.756 "iscsi_target_node_set_auth", 00:06:02.756 "iscsi_target_node_add_lun", 00:06:02.756 "iscsi_get_stats", 00:06:02.756 "iscsi_get_connections", 00:06:02.756 "iscsi_portal_group_set_auth", 00:06:02.756 "iscsi_start_portal_group", 00:06:02.756 "iscsi_delete_portal_group", 00:06:02.756 "iscsi_create_portal_group", 00:06:02.756 "iscsi_get_portal_groups", 00:06:02.756 "iscsi_delete_target_node", 00:06:02.756 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.756 "iscsi_target_node_add_pg_ig_maps", 00:06:02.756 "iscsi_create_target_node", 00:06:02.756 "iscsi_get_target_nodes", 00:06:02.756 "iscsi_delete_initiator_group", 00:06:02.756 "iscsi_initiator_group_remove_initiators", 00:06:02.756 "iscsi_initiator_group_add_initiators", 00:06:02.756 "iscsi_create_initiator_group", 00:06:02.756 "iscsi_get_initiator_groups", 00:06:02.756 "nvmf_set_crdt", 00:06:02.756 "nvmf_set_config", 00:06:02.756 "nvmf_set_max_subsystems", 00:06:02.756 "nvmf_stop_mdns_prr", 00:06:02.756 "nvmf_publish_mdns_prr", 00:06:02.756 "nvmf_subsystem_get_listeners", 00:06:02.756 "nvmf_subsystem_get_qpairs", 00:06:02.756 "nvmf_subsystem_get_controllers", 00:06:02.756 "nvmf_get_stats", 00:06:02.756 "nvmf_get_transports", 00:06:02.756 "nvmf_create_transport", 00:06:02.756 "nvmf_get_targets", 00:06:02.756 "nvmf_delete_target", 00:06:02.756 "nvmf_create_target", 00:06:02.756 "nvmf_subsystem_allow_any_host", 00:06:02.756 "nvmf_subsystem_remove_host", 00:06:02.756 "nvmf_subsystem_add_host", 00:06:02.756 "nvmf_ns_remove_host", 00:06:02.756 "nvmf_ns_add_host", 00:06:02.756 "nvmf_subsystem_remove_ns", 00:06:02.756 "nvmf_subsystem_add_ns", 00:06:02.756 "nvmf_subsystem_listener_set_ana_state", 00:06:02.756 "nvmf_discovery_get_referrals", 00:06:02.756 
"nvmf_discovery_remove_referral", 00:06:02.756 "nvmf_discovery_add_referral", 00:06:02.756 "nvmf_subsystem_remove_listener", 00:06:02.756 "nvmf_subsystem_add_listener", 00:06:02.756 "nvmf_delete_subsystem", 00:06:02.756 "nvmf_create_subsystem", 00:06:02.756 "nvmf_get_subsystems", 00:06:02.756 "env_dpdk_get_mem_stats", 00:06:02.756 "nbd_get_disks", 00:06:02.756 "nbd_stop_disk", 00:06:02.756 "nbd_start_disk", 00:06:02.756 "ublk_recover_disk", 00:06:02.756 "ublk_get_disks", 00:06:02.756 "ublk_stop_disk", 00:06:02.756 "ublk_start_disk", 00:06:02.756 "ublk_destroy_target", 00:06:02.756 "ublk_create_target", 00:06:02.756 "virtio_blk_create_transport", 00:06:02.756 "virtio_blk_get_transports", 00:06:02.756 "vhost_controller_set_coalescing", 00:06:02.756 "vhost_get_controllers", 00:06:02.756 "vhost_delete_controller", 00:06:02.756 "vhost_create_blk_controller", 00:06:02.756 "vhost_scsi_controller_remove_target", 00:06:02.756 "vhost_scsi_controller_add_target", 00:06:02.756 "vhost_start_scsi_controller", 00:06:02.756 "vhost_create_scsi_controller", 00:06:02.756 "thread_set_cpumask", 00:06:02.756 "framework_get_governor", 00:06:02.756 "framework_get_scheduler", 00:06:02.756 "framework_set_scheduler", 00:06:02.756 "framework_get_reactors", 00:06:02.756 "thread_get_io_channels", 00:06:02.756 "thread_get_pollers", 00:06:02.756 "thread_get_stats", 00:06:02.756 "framework_monitor_context_switch", 00:06:02.756 "spdk_kill_instance", 00:06:02.756 "log_enable_timestamps", 00:06:02.756 "log_get_flags", 00:06:02.756 "log_clear_flag", 00:06:02.756 "log_set_flag", 00:06:02.756 "log_get_level", 00:06:02.756 "log_set_level", 00:06:02.756 "log_get_print_level", 00:06:02.756 "log_set_print_level", 00:06:02.756 "framework_enable_cpumask_locks", 00:06:02.756 "framework_disable_cpumask_locks", 00:06:02.756 "framework_wait_init", 00:06:02.756 "framework_start_init", 00:06:02.756 "scsi_get_devices", 00:06:02.756 "bdev_get_histogram", 00:06:02.756 "bdev_enable_histogram", 00:06:02.756 
"bdev_set_qos_limit", 00:06:02.756 "bdev_set_qd_sampling_period", 00:06:02.756 "bdev_get_bdevs", 00:06:02.756 "bdev_reset_iostat", 00:06:02.756 "bdev_get_iostat", 00:06:02.756 "bdev_examine", 00:06:02.756 "bdev_wait_for_examine", 00:06:02.756 "bdev_set_options", 00:06:02.756 "notify_get_notifications", 00:06:02.756 "notify_get_types", 00:06:02.756 "accel_get_stats", 00:06:02.756 "accel_set_options", 00:06:02.756 "accel_set_driver", 00:06:02.756 "accel_crypto_key_destroy", 00:06:02.757 "accel_crypto_keys_get", 00:06:02.757 "accel_crypto_key_create", 00:06:02.757 "accel_assign_opc", 00:06:02.757 "accel_get_module_info", 00:06:02.757 "accel_get_opc_assignments", 00:06:02.757 "vmd_rescan", 00:06:02.757 "vmd_remove_device", 00:06:02.757 "vmd_enable", 00:06:02.757 "sock_get_default_impl", 00:06:02.757 "sock_set_default_impl", 00:06:02.757 "sock_impl_set_options", 00:06:02.757 "sock_impl_get_options", 00:06:02.757 "iobuf_get_stats", 00:06:02.757 "iobuf_set_options", 00:06:02.757 "keyring_get_keys", 00:06:02.757 "framework_get_pci_devices", 00:06:02.757 "framework_get_config", 00:06:02.757 "framework_get_subsystems", 00:06:02.757 "vfu_tgt_set_base_path", 00:06:02.757 "trace_get_info", 00:06:02.757 "trace_get_tpoint_group_mask", 00:06:02.757 "trace_disable_tpoint_group", 00:06:02.757 "trace_enable_tpoint_group", 00:06:02.757 "trace_clear_tpoint_mask", 00:06:02.757 "trace_set_tpoint_mask", 00:06:02.757 "spdk_get_version", 00:06:02.757 "rpc_get_methods" 00:06:02.757 ] 00:06:02.757 11:52:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.757 11:52:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.757 11:52:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3938972 00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3938972 ']' 
00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3938972 00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.757 11:52:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3938972 00:06:03.016 11:52:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.016 11:52:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.016 11:52:40 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3938972' 00:06:03.016 killing process with pid 3938972 00:06:03.016 11:52:40 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3938972 00:06:03.016 11:52:40 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3938972 00:06:03.276 00:06:03.276 real 0m1.788s 00:06:03.276 user 0m3.449s 00:06:03.276 sys 0m0.517s 00:06:03.276 11:52:40 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.276 11:52:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.276 ************************************ 00:06:03.276 END TEST spdkcli_tcp 00:06:03.276 ************************************ 00:06:03.276 11:52:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.276 11:52:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.276 11:52:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.276 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:03.276 ************************************ 00:06:03.276 START TEST dpdk_mem_utility 00:06:03.276 ************************************ 00:06:03.276 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.276 
* Looking for test storage... 00:06:03.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:03.535 11:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.535 11:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3939341 00:06:03.535 11:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3939341 00:06:03.535 11:52:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.535 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3939341 ']' 00:06:03.535 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.535 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.535 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.535 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.535 11:52:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.535 [2024-07-25 11:52:40.643531] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:03.535 [2024-07-25 11:52:40.643595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939341 ] 00:06:03.535 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.535 [2024-07-25 11:52:40.724975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.535 [2024-07-25 11:52:40.814116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.509 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.509 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:04.509 11:52:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.509 11:52:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.509 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.509 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.509 { 00:06:04.509 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.509 } 00:06:04.509 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.509 11:52:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:04.509 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:04.509 1 heaps totaling size 814.000000 MiB 00:06:04.509 size: 814.000000 MiB heap id: 0 00:06:04.509 end heaps---------- 00:06:04.509 8 mempools totaling size 598.116089 MiB 00:06:04.509 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.509 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.509 size: 84.521057 MiB name: bdev_io_3939341 00:06:04.509 size: 51.011292 MiB name: evtpool_3939341 
00:06:04.509 size: 50.003479 MiB name: msgpool_3939341
00:06:04.509 size: 21.763794 MiB name: PDU_Pool
00:06:04.509 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:04.509 size: 0.026123 MiB name: Session_Pool
00:06:04.509 end mempools-------
00:06:04.509 6 memzones totaling size 4.142822 MiB
00:06:04.509 size: 1.000366 MiB name: RG_ring_0_3939341
00:06:04.509 size: 1.000366 MiB name: RG_ring_1_3939341
00:06:04.509 size: 1.000366 MiB name: RG_ring_4_3939341
00:06:04.509 size: 1.000366 MiB name: RG_ring_5_3939341
00:06:04.509 size: 0.125366 MiB name: RG_ring_2_3939341
00:06:04.509 size: 0.015991 MiB name: RG_ring_3_3939341
00:06:04.509 end memzones-------
00:06:04.509 11:52:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:04.509 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:06:04.509 list of free elements. size: 12.519348 MiB
00:06:04.509 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:04.509 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:04.509 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:04.509 element at address: 0x200003e00000 with size: 0.996277 MiB
00:06:04.509 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:04.509 element at address: 0x200013800000 with size: 0.978699 MiB
00:06:04.509 element at address: 0x200007000000 with size: 0.959839 MiB
00:06:04.509 element at address: 0x200019200000 with size: 0.936584 MiB
00:06:04.509 element at address: 0x200000200000 with size: 0.841614 MiB
00:06:04.509 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:06:04.509 element at address: 0x20000b200000 with size: 0.490723 MiB
00:06:04.509 element at address: 0x200000800000 with size: 0.487793 MiB
00:06:04.509 element at address: 0x200019400000 with size: 0.485657 MiB
00:06:04.509 element at address: 0x200027e00000 with size: 0.410034 MiB
00:06:04.509 element at address: 0x200003a00000 with size: 0.355530 MiB
00:06:04.509 list of standard malloc elements. size: 199.218079 MiB
00:06:04.509 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:06:04.509 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:06:04.509 element at address: 0x200018efff80 with size: 1.000122 MiB
00:06:04.509 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:06:04.509 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:06:04.509 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:04.509 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:06:04.509 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:04.509 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:06:04.509 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003adb300 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003adb500 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003affa80 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003affb40 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:06:04.509 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:06:04.509 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200027e69040 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:06:04.509 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:06:04.509 list of memzone associated elements. size: 602.262573 MiB
00:06:04.509 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:04.509 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:04.509 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:04.509 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:04.509 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:04.509 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3939341_0
00:06:04.509 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:04.510 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3939341_0
00:06:04.510 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:04.510 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3939341_0
00:06:04.510 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:04.510 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:04.510 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:04.510 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:04.510 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:04.510 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3939341
00:06:04.510 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:04.510 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3939341
00:06:04.510 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:04.510 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3939341
00:06:04.510 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:04.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:04.510 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:04.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:04.510 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:04.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:04.510 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:04.510 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:04.510 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:04.510 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3939341
00:06:04.510 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:04.510 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3939341
00:06:04.510 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:04.510 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3939341
00:06:04.510 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:04.510 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3939341
00:06:04.510 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:04.510 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3939341
00:06:04.510 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:04.510 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:04.510 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:04.510 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:04.510 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:04.510 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:04.510 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:04.510 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3939341
00:06:04.510 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:04.510 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:04.510 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:04.510 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:04.510 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:04.510 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3939341
00:06:04.510 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:04.510 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:04.510 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:04.510 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3939341
00:06:04.510 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:04.510 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3939341
00:06:04.510 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:04.510 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:04.510 11:52:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:04.510 11:52:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3939341
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3939341 ']'
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3939341
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3939341
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3939341'
00:06:04.510 killing process with pid 3939341 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3939341
00:06:04.510 11:52:41 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3939341
00:06:05.079
00:06:05.079 real 0m1.615s
00:06:05.079 user 0m1.838s
00:06:05.079 sys 0m0.430s
00:06:05.079 11:52:42 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.079 11:52:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:05.079 ************************************
00:06:05.079 END TEST dpdk_mem_utility
00:06:05.079 ************************************
00:06:05.079 11:52:42 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:05.079 11:52:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:05.079 11:52:42 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.079 11:52:42 -- common/autotest_common.sh@10 -- # set +x
00:06:05.079 ************************************
00:06:05.079 START TEST event
00:06:05.079 ************************************
00:06:05.079 11:52:42 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:05.079 * Looking for test storage...
00:06:05.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:05.079 11:52:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:05.079 11:52:42 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:05.079 11:52:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:05.079 11:52:42 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:05.079 11:52:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.079 11:52:42 event -- common/autotest_common.sh@10 -- # set +x
00:06:05.079 ************************************
00:06:05.079 START TEST event_perf
00:06:05.079 ************************************
00:06:05.079 11:52:42 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:05.080 Running I/O for 1 seconds...[2024-07-25 11:52:42.324853] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:06:05.080 [2024-07-25 11:52:42.324919] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939673 ]
00:06:05.080 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.339 [2024-07-25 11:52:42.405601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:05.339 [2024-07-25 11:52:42.498357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:05.339 [2024-07-25 11:52:42.498469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:05.339 [2024-07-25 11:52:42.498593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.339 [2024-07-25 11:52:42.498594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.275 Running I/O for 1 seconds...
00:06:06.275 lcore 0: 102089
00:06:06.275 lcore 1: 102092
00:06:06.275 lcore 2: 102094
00:06:06.275 lcore 3: 102093
00:06:06.275 done.
00:06:06.534
00:06:06.534 real 0m1.275s
00:06:06.534 user 0m4.169s
00:06:06.534 sys 0m0.097s
00:06:06.534 11:52:43 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:06.534 11:52:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:06.534 ************************************
00:06:06.534 END TEST event_perf
00:06:06.534 ************************************
00:06:06.534 11:52:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:06.534 11:52:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:06.534 11:52:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:06.534 11:52:43 event -- common/autotest_common.sh@10 -- # set +x
00:06:06.534 ************************************
00:06:06.534 START TEST event_reactor
00:06:06.534 ************************************
00:06:06.534 11:52:43 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:06.534 [2024-07-25 11:52:43.664710] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:06:06.534 [2024-07-25 11:52:43.664775] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939955 ]
00:06:06.534 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.534 [2024-07-25 11:52:43.746046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.534 [2024-07-25 11:52:43.832862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.912 test_start
00:06:07.912 oneshot
00:06:07.912 tick 100
00:06:07.912 tick 100
00:06:07.912 tick 250
00:06:07.912 tick 100
00:06:07.912 tick 100
00:06:07.912 tick 250
00:06:07.912 tick 100
00:06:07.912 tick 500
00:06:07.912 tick 100
00:06:07.912 tick 100
00:06:07.912 tick 250
00:06:07.912 tick 100
00:06:07.912 tick 100
00:06:07.912 test_end
00:06:07.912
00:06:07.912 real 0m1.265s
00:06:07.912 user 0m1.175s
00:06:07.912 sys 0m0.085s
00:06:07.912 11:52:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:07.912 11:52:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:07.912 ************************************
00:06:07.912 END TEST event_reactor
00:06:07.912 ************************************
00:06:07.912 11:52:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:07.912 11:52:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:07.912 11:52:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:07.912 11:52:44 event -- common/autotest_common.sh@10 -- # set +x
00:06:07.912 ************************************
00:06:07.913 START TEST event_reactor_perf
00:06:07.913 ************************************
00:06:07.913 11:52:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:07.913 [2024-07-25 11:52:45.000331] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:06:07.913 [2024-07-25 11:52:45.000398] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940241 ]
00:06:07.913 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.913 [2024-07-25 11:52:45.080139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.913 [2024-07-25 11:52:45.166056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.290 test_start
00:06:09.290 test_end
00:06:09.290 Performance: 309062 events per second
00:06:09.290
00:06:09.290 real 0m1.263s
00:06:09.290 user 0m1.168s
00:06:09.290 sys 0m0.089s
00:06:09.290 11:52:46 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.290 11:52:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:09.290 ************************************
00:06:09.290 END TEST event_reactor_perf
00:06:09.290 ************************************
00:06:09.290 11:52:46 event -- event/event.sh@49 -- # uname -s
00:06:09.290 11:52:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:09.290 11:52:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:09.291 11:52:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.291 11:52:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.291 11:52:46 event -- common/autotest_common.sh@10 -- # set +x
00:06:09.291 ************************************
00:06:09.291 START TEST event_scheduler
00:06:09.291 ************************************
00:06:09.291 11:52:46 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:09.291 * Looking for test storage...
00:06:09.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:09.291 11:52:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:09.291 11:52:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3940545
00:06:09.291 11:52:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:09.291 11:52:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:09.291 11:52:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3940545
00:06:09.291 11:52:46 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3940545 ']'
00:06:09.291 11:52:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.291 11:52:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:09.291 11:52:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:52:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:09.291 11:52:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:09.291 [2024-07-25 11:52:46.449921] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:06:09.291 [2024-07-25 11:52:46.449984] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940545 ]
00:06:09.291 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.291 [2024-07-25 11:52:46.562661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:09.550 [2024-07-25 11:52:46.719122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.550 [2024-07-25 11:52:46.719219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.550 [2024-07-25 11:52:46.719333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:09.550 [2024-07-25 11:52:46.719343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:10.118 11:52:47 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:10.118 11:52:47 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:06:10.118 11:52:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:10.118 11:52:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.118 11:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.119 [2024-07-25 11:52:47.406713] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:10.119 [2024-07-25 11:52:47.406758] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:06:10.119 [2024-07-25 11:52:47.406784] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:10.119 [2024-07-25 11:52:47.406801] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:10.119 [2024-07-25 11:52:47.406817] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:10.119 11:52:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.119 11:52:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:10.119 11:52:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.119 11:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 [2024-07-25 11:52:47.518966] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:10.379 11:52:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:10.379 11:52:47 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:10.379 11:52:47 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 ************************************
00:06:10.379 START TEST scheduler_create_thread
00:06:10.379 ************************************
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 2
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 3
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 4
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 5
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 6
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 7
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 8
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 9
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 10
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.379 11:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.948 11:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.948 11:52:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:10.948 11:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.948 11:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:12.329 11:52:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:12.329 11:52:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:12.329 11:52:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:12.329 11:52:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:12.329 11:52:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.707 11:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:13.707
00:06:13.707 real 0m3.105s
00:06:13.707 user 0m0.022s
00:06:13.707 sys 0m0.008s
00:06:13.707 11:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:13.707 11:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.707 ************************************
00:06:13.707 END TEST scheduler_create_thread
00:06:13.707 ************************************
00:06:13.707 11:52:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:13.707 11:52:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3940545
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3940545 ']'
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3940545
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3940545
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3940545'
00:06:13.707 killing process with pid 3940545 11:52:50 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3940545
00:06:13.707 11:52:50 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3940545
00:06:13.967 [2024-07-25 11:52:51.041016] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:14.226
00:06:14.226 real 0m5.049s
00:06:14.226 user 0m9.725s
00:06:14.226 sys 0m0.446s
00:06:14.226 11:52:51 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:14.226 11:52:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:14.226 ************************************
00:06:14.226 END TEST event_scheduler
00:06:14.226 ************************************
00:06:14.226 11:52:51 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:14.226 11:52:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:14.226 11:52:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:14.226 11:52:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:14.226 11:52:51 event -- common/autotest_common.sh@10 -- # set +x
00:06:14.226 ************************************
00:06:14.226 START TEST app_repeat
00:06:14.226 ************************************
00:06:14.226 11:52:51 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3941409 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3941409' 00:06:14.226 Process app_repeat pid: 3941409 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.226 spdk_app_start Round 0 00:06:14.226 11:52:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3941409 /var/tmp/spdk-nbd.sock 00:06:14.226 11:52:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3941409 ']' 00:06:14.226 11:52:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.226 11:52:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.227 11:52:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.227 11:52:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.227 11:52:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.227 [2024-07-25 11:52:51.481921] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
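The waitforlisten step above blocks until the freshly launched app_repeat process is reachable on its UNIX-domain RPC socket. A minimal sketch of that polling loop, assuming only a pid-liveness check plus a socket-file test (the real helper also retries an rpc.py probe, which is omitted here):

```shell
#!/usr/bin/env bash
# Hedged sketch of waitforlisten: poll until the target pid is alive AND
# its RPC socket exists, giving up after ~10 seconds. Names are assumptions.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for ((i = 1; i <= 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1  # process died while we waited
    [ -S "$rpc_addr" ] && return 0          # socket is up; RPCs can proceed
    sleep 0.1
  done
  return 1
}
```

The two exit paths matter: returning early when the pid dies prevents the caller from hanging for the full timeout on a crashed app, which is why the trace interleaves the pid check with the socket check.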
00:06:14.227 [2024-07-25 11:52:51.481984] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3941409 ] 00:06:14.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.486 [2024-07-25 11:52:51.563485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.486 [2024-07-25 11:52:51.658205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.486 [2024-07-25 11:52:51.658211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.486 11:52:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.486 11:52:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:14.486 11:52:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.744 Malloc0 00:06:14.744 11:52:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.003 Malloc1 00:06:15.003 11:52:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.003 11:52:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.298 /dev/nbd0 00:06:15.298 11:52:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.298 11:52:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.298 1+0 records in 00:06:15.298 1+0 records out 00:06:15.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191558 s, 21.4 MB/s 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.298 11:52:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.298 11:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.298 11:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.298 11:52:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.558 /dev/nbd1 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.558 11:52:52 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.558 1+0 records in 00:06:15.558 1+0 records out 00:06:15.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189582 s, 21.6 MB/s 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.558 11:52:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.558 11:52:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.817 11:52:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.817 { 00:06:15.817 "nbd_device": "/dev/nbd0", 00:06:15.818 "bdev_name": "Malloc0" 00:06:15.818 }, 00:06:15.818 { 00:06:15.818 "nbd_device": "/dev/nbd1", 00:06:15.818 "bdev_name": "Malloc1" 00:06:15.818 } 00:06:15.818 ]' 00:06:15.818 11:52:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.818 { 
00:06:15.818 "nbd_device": "/dev/nbd0", 00:06:15.818 "bdev_name": "Malloc0" 00:06:15.818 }, 00:06:15.818 { 00:06:15.818 "nbd_device": "/dev/nbd1", 00:06:15.818 "bdev_name": "Malloc1" 00:06:15.818 } 00:06:15.818 ]' 00:06:15.818 11:52:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.077 /dev/nbd1' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.077 /dev/nbd1' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.077 256+0 records in 00:06:16.077 256+0 records out 00:06:16.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102834 s, 102 MB/s 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.077 256+0 records in 00:06:16.077 256+0 records out 00:06:16.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199509 s, 52.6 MB/s 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.077 256+0 records in 00:06:16.077 256+0 records out 00:06:16.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021174 s, 49.5 MB/s 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.077 11:52:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.336 11:52:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.595 11:52:53 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.595 11:52:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.854 11:52:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.854 11:52:54 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:06:16.854 11:52:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.113 11:52:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.373 [2024-07-25 11:52:54.597679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.632 [2024-07-25 11:52:54.679520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.632 [2024-07-25 11:52:54.679525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.632 [2024-07-25 11:52:54.724010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.632 [2024-07-25 11:52:54.724068] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.170 11:52:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.170 11:52:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.170 spdk_app_start Round 1 00:06:20.170 11:52:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3941409 /var/tmp/spdk-nbd.sock 00:06:20.170 11:52:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3941409 ']' 00:06:20.170 11:52:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.170 11:52:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.170 11:52:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
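Each round that just completed runs the same nbd_dd_data_verify cycle: fill a temp file from /dev/urandom, dd it onto every attached nbd device, then cmp each device back against the source. The pattern can be reproduced with plain temp files standing in for /dev/nbd0 and /dev/nbd1 (a real run needs the nbd kernel module and uses oflag=direct on the block devices):

```shell
#!/usr/bin/env bash
# Write/verify cycle mirroring nbd_dd_data_verify, with regular files as
# stand-ins for the nbd block devices.
tmp_file=$(mktemp)
dev0=$(mktemp)  # stand-in for /dev/nbd0
dev1=$(mktemp)  # stand-in for /dev/nbd1

# write phase: 1 MiB of random data pushed to every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: every device must match the random source byte-for-byte
for dev in "$dev0" "$dev1"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file" "$dev0" "$dev1"
```

Because the source data is random per round, a stale page cache or a device wired to the wrong bdev fails the cmp immediately rather than passing by accident.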
00:06:20.170 11:52:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.170 11:52:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.430 11:52:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.430 11:52:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.430 11:52:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.688 Malloc0 00:06:20.688 11:52:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.947 Malloc1 00:06:20.947 11:52:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.947 11:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.206 /dev/nbd0 00:06:21.206 11:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.206 11:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.206 1+0 records in 00:06:21.206 1+0 records out 00:06:21.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231783 s, 17.7 MB/s 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.206 11:52:58 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.206 11:52:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.206 11:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.206 11:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.206 11:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.465 /dev/nbd1 00:06:21.465 11:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.465 11:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.465 1+0 records in 00:06:21.465 1+0 records out 00:06:21.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197141 s, 20.8 MB/s 00:06:21.465 11:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.724 11:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.724 11:52:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.724 11:52:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.724 11:52:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.724 11:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.724 11:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.724 11:52:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.724 11:52:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.724 11:52:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.983 { 00:06:21.983 "nbd_device": "/dev/nbd0", 00:06:21.983 "bdev_name": "Malloc0" 00:06:21.983 }, 00:06:21.983 { 00:06:21.983 "nbd_device": "/dev/nbd1", 00:06:21.983 "bdev_name": "Malloc1" 00:06:21.983 } 00:06:21.983 ]' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.983 { 00:06:21.983 "nbd_device": "/dev/nbd0", 00:06:21.983 "bdev_name": "Malloc0" 00:06:21.983 }, 00:06:21.983 { 00:06:21.983 "nbd_device": "/dev/nbd1", 00:06:21.983 "bdev_name": "Malloc1" 00:06:21.983 } 00:06:21.983 ]' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.983 /dev/nbd1' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.983 /dev/nbd1' 00:06:21.983 
11:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.983 256+0 records in 00:06:21.983 256+0 records out 00:06:21.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010068 s, 104 MB/s 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.983 256+0 records in 00:06:21.983 256+0 records out 00:06:21.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197977 s, 53.0 MB/s 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.983 256+0 records in 00:06:21.983 256+0 records out 00:06:21.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209293 s, 50.1 MB/s 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.983 11:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.984 11:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.243 11:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.501 11:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.502 11:52:59 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.502 11:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.761 11:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.761 11:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.761 11:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.761 11:53:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.761 11:53:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.020 11:53:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.279 [2024-07-25 11:53:00.506257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.539 [2024-07-25 11:53:00.591804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.539 [2024-07-25 11:53:00.591809] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.539 [2024-07-25 11:53:00.636905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.539 [2024-07-25 11:53:00.636945] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.076 11:53:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.076 11:53:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:26.076 spdk_app_start Round 2 00:06:26.076 11:53:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3941409 /var/tmp/spdk-nbd.sock 00:06:26.076 11:53:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3941409 ']' 00:06:26.076 11:53:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.077 11:53:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.077 11:53:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:26.077 11:53:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.077 11:53:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.337 11:53:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.337 11:53:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:26.337 11:53:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.645 Malloc0 00:06:26.645 11:53:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.904 Malloc1 00:06:26.904 11:53:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.904 11:53:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.167 /dev/nbd0 00:06:27.167 11:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.167 11:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.167 11:53:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:27.167 11:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.167 11:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.167 11:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.167 11:53:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.168 1+0 records in 00:06:27.168 1+0 records out 00:06:27.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020002 s, 20.5 MB/s 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.168 11:53:04 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.168 11:53:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.168 11:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.168 11:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.168 11:53:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.440 /dev/nbd1 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.440 1+0 records in 00:06:27.440 1+0 records out 00:06:27.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253508 s, 16.2 MB/s 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.440 11:53:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.440 11:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.700 { 00:06:27.700 "nbd_device": "/dev/nbd0", 00:06:27.700 "bdev_name": "Malloc0" 00:06:27.700 }, 00:06:27.700 { 00:06:27.700 "nbd_device": "/dev/nbd1", 00:06:27.700 "bdev_name": "Malloc1" 00:06:27.700 } 00:06:27.700 ]' 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.700 { 00:06:27.700 "nbd_device": "/dev/nbd0", 00:06:27.700 "bdev_name": "Malloc0" 00:06:27.700 }, 00:06:27.700 { 00:06:27.700 "nbd_device": "/dev/nbd1", 00:06:27.700 "bdev_name": "Malloc1" 00:06:27.700 } 00:06:27.700 ]' 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.700 /dev/nbd1' 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.700 /dev/nbd1' 00:06:27.700 
11:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.700 11:53:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.960 256+0 records in 00:06:27.960 256+0 records out 00:06:27.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00976902 s, 107 MB/s 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.960 256+0 records in 00:06:27.960 256+0 records out 00:06:27.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198275 s, 52.9 MB/s 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.960 256+0 records in 00:06:27.960 256+0 records out 00:06:27.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210311 s, 49.9 MB/s 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.960 11:53:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.220 11:53:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.479 11:53:05 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.479 11:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.739 11:53:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.739 11:53:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.998 11:53:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.259 [2024-07-25 11:53:06.422358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.259 [2024-07-25 11:53:06.504004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.259 [2024-07-25 11:53:06.504010] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.259 [2024-07-25 11:53:06.549252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.259 [2024-07-25 11:53:06.549295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.551 11:53:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3941409 /var/tmp/spdk-nbd.sock 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3941409 ']' 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.551 11:53:09 event.app_repeat -- event/event.sh@39 -- # killprocess 3941409 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3941409 ']' 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3941409 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3941409 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3941409' 00:06:32.551 killing process with pid 3941409 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3941409 00:06:32.551 11:53:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3941409 00:06:32.551 spdk_app_start is called in Round 0. 00:06:32.551 Shutdown signal received, stop current app iteration 00:06:32.551 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:06:32.551 spdk_app_start is called in Round 1. 00:06:32.551 Shutdown signal received, stop current app iteration 00:06:32.551 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:06:32.551 spdk_app_start is called in Round 2. 
00:06:32.551 Shutdown signal received, stop current app iteration 00:06:32.551 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:06:32.551 spdk_app_start is called in Round 3. 00:06:32.552 Shutdown signal received, stop current app iteration 00:06:32.552 11:53:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:32.552 11:53:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:32.552 00:06:32.552 real 0m18.278s 00:06:32.552 user 0m40.912s 00:06:32.552 sys 0m2.905s 00:06:32.552 11:53:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.552 11:53:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.552 ************************************ 00:06:32.552 END TEST app_repeat 00:06:32.552 ************************************ 00:06:32.552 11:53:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:32.552 11:53:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.552 11:53:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.552 11:53:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.552 11:53:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.552 ************************************ 00:06:32.552 START TEST cpu_locks 00:06:32.552 ************************************ 00:06:32.552 11:53:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.811 * Looking for test storage... 
00:06:32.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:32.811 11:53:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:32.811 11:53:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:32.812 11:53:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:32.812 11:53:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:32.812 11:53:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.812 11:53:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.812 11:53:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.812 ************************************ 00:06:32.812 START TEST default_locks 00:06:32.812 ************************************ 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3945024 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3945024 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3945024 ']' 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.812 11:53:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.812 [2024-07-25 11:53:09.967189] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:32.812 [2024-07-25 11:53:09.967245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945024 ] 00:06:32.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.812 [2024-07-25 11:53:10.050070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.071 [2024-07-25 11:53:10.148734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.011 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.011 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:34.011 11:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3945024 00:06:34.011 11:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3945024 00:06:34.011 11:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.579 lslocks: write error 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3945024 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3945024 ']' 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3945024 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:34.579 11:53:11 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3945024 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3945024' 00:06:34.579 killing process with pid 3945024 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3945024 00:06:34.579 11:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3945024 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3945024 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3945024 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3945024 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3945024 ']' 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3945024) - No such process 00:06:34.838 ERROR: process (pid: 3945024) is no longer running 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.838 00:06:34.838 real 0m2.117s 00:06:34.838 user 0m2.517s 00:06:34.838 sys 0m0.684s 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.838 11:53:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.838 ************************************ 00:06:34.838 END TEST default_locks 00:06:34.838 ************************************ 00:06:34.838 11:53:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.838 11:53:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.838 11:53:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.838 11:53:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 ************************************ 00:06:34.838 START TEST default_locks_via_rpc 00:06:34.838 ************************************ 00:06:34.838 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:34.838 11:53:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3945569 00:06:34.838 11:53:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3945569 00:06:34.838 11:53:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.839 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3945569 ']' 00:06:34.839 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.839 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.839 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.839 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.839 11:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.098 [2024-07-25 11:53:12.147114] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:35.098 [2024-07-25 11:53:12.147155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945569 ] 00:06:35.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.098 [2024-07-25 11:53:12.217186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.098 [2024-07-25 11:53:12.301347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3945569 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3945569 00:06:36.036 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3945569 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3945569 ']' 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3945569 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3945569 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3945569' 00:06:36.602 killing process with pid 3945569 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 3945569 00:06:36.602 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3945569 00:06:36.861 00:06:36.861 real 0m1.884s 00:06:36.861 user 0m2.028s 00:06:36.861 sys 0m0.622s 00:06:36.861 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.861 11:53:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.861 ************************************ 00:06:36.861 END TEST default_locks_via_rpc 00:06:36.861 ************************************ 00:06:36.861 11:53:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:36.861 11:53:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.861 11:53:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.861 11:53:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.861 ************************************ 00:06:36.861 START TEST non_locking_app_on_locked_coremask 00:06:36.861 ************************************ 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3945871 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3945871 /var/tmp/spdk.sock 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3945871 ']' 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.861 11:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.861 [2024-07-25 11:53:14.101109] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:36.861 [2024-07-25 11:53:14.101161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945871 ] 00:06:36.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.120 [2024-07-25 11:53:14.182202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.120 [2024-07-25 11:53:14.272169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3946136 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3946136 /var/tmp/spdk2.sock 00:06:38.100 11:53:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3946136 ']' 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.100 11:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.100 [2024-07-25 11:53:15.125187] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:38.100 [2024-07-25 11:53:15.125302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946136 ] 00:06:38.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.100 [2024-07-25 11:53:15.269857] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.100 [2024-07-25 11:53:15.269885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.359 [2024-07-25 11:53:15.445162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.295 11:53:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.295 11:53:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.295 11:53:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3945871 00:06:39.295 11:53:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3945871 00:06:39.295 11:53:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.231 lslocks: write error 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3945871 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3945871 ']' 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3945871 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3945871 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3945871' 00:06:40.231 killing process with pid 3945871 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3945871 00:06:40.231 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3945871 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3946136 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3946136 ']' 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3946136 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3946136 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3946136' 00:06:40.799 killing process with pid 3946136 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3946136 00:06:40.799 11:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3946136 00:06:41.058 00:06:41.058 real 0m4.278s 00:06:41.058 user 0m4.997s 00:06:41.058 sys 0m1.243s 00:06:41.058 11:53:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.058 11:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.058 ************************************ 00:06:41.058 END TEST non_locking_app_on_locked_coremask 00:06:41.058 ************************************ 00:06:41.058 11:53:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.058 11:53:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.058 11:53:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.058 11:53:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.317 ************************************ 00:06:41.317 START TEST locking_app_on_unlocked_coremask 00:06:41.317 ************************************ 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3946703 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3946703 /var/tmp/spdk.sock 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3946703 ']' 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.317 11:53:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.317 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.317 [2024-07-25 11:53:18.448656] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:41.318 [2024-07-25 11:53:18.448712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946703 ] 00:06:41.318 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.318 [2024-07-25 11:53:18.528617] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.318 [2024-07-25 11:53:18.528649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.318 [2024-07-25 11:53:18.611145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3946728 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3946728 /var/tmp/spdk2.sock 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3946728 ']' 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.576 11:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.835 [2024-07-25 11:53:18.880617] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:41.835 [2024-07-25 11:53:18.880680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946728 ] 00:06:41.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.835 [2024-07-25 11:53:18.988636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.094 [2024-07-25 11:53:19.162996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.662 11:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.662 11:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:42.662 11:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3946728 00:06:42.662 11:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3946728 00:06:42.662 11:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.231 lslocks: write error 00:06:43.231 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3946703 00:06:43.231 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3946703 ']' 00:06:43.231 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3946703 00:06:43.231 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.231 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.231 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3946703 00:06:43.231 11:53:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.232 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.232 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3946703' 00:06:43.232 killing process with pid 3946703 00:06:43.232 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3946703 00:06:43.232 11:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3946703 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3946728 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3946728 ']' 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3946728 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3946728 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3946728' 00:06:44.169 killing process with pid 3946728 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 3946728 00:06:44.169 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3946728 00:06:44.429 00:06:44.429 real 0m3.133s 00:06:44.429 user 0m3.396s 00:06:44.429 sys 0m1.032s 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.429 ************************************ 00:06:44.429 END TEST locking_app_on_unlocked_coremask 00:06:44.429 ************************************ 00:06:44.429 11:53:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.429 11:53:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.429 11:53:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.429 11:53:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.429 ************************************ 00:06:44.429 START TEST locking_app_on_locked_coremask 00:06:44.429 ************************************ 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3947272 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3947272 /var/tmp/spdk.sock 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3947272 ']' 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.429 11:53:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.429 [2024-07-25 11:53:21.654401] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:44.429 [2024-07-25 11:53:21.654467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947272 ] 00:06:44.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.688 [2024-07-25 11:53:21.736609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.688 [2024-07-25 11:53:21.827579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3947534 00:06:45.627 
11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3947534 /var/tmp/spdk2.sock 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3947534 /var/tmp/spdk2.sock 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3947534 /var/tmp/spdk2.sock 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3947534 ']' 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.627 11:53:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.627 [2024-07-25 11:53:22.647134] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:45.627 [2024-07-25 11:53:22.647195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947534 ] 00:06:45.627 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.627 [2024-07-25 11:53:22.756131] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3947272 has claimed it. 00:06:45.627 [2024-07-25 11:53:22.756175] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3947534) - No such process 00:06:46.197 ERROR: process (pid: 3947534) is no longer running 00:06:46.197 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.197 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:46.197 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:46.198 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.198 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.198 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.198 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 3947272 00:06:46.198 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3947272 00:06:46.198 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.460 lslocks: write error 00:06:46.460 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3947272 00:06:46.460 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3947272 ']' 00:06:46.460 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3947272 00:06:46.460 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.460 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.460 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3947272 00:06:46.719 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.719 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.719 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3947272' 00:06:46.719 killing process with pid 3947272 00:06:46.719 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3947272 00:06:46.719 11:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3947272 00:06:46.978 00:06:46.978 real 0m2.537s 00:06:46.978 user 0m2.922s 00:06:46.978 sys 0m0.699s 00:06:46.978 11:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.978 
11:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.978 ************************************ 00:06:46.979 END TEST locking_app_on_locked_coremask 00:06:46.979 ************************************ 00:06:46.979 11:53:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:46.979 11:53:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.979 11:53:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.979 11:53:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.979 ************************************ 00:06:46.979 START TEST locking_overlapped_coremask 00:06:46.979 ************************************ 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3947824 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3947824 /var/tmp/spdk.sock 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3947824 ']' 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.979 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.979 [2024-07-25 11:53:24.262093] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:46.979 [2024-07-25 11:53:24.262153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947824 ] 00:06:47.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.238 [2024-07-25 11:53:24.343005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.238 [2024-07-25 11:53:24.429376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.238 [2024-07-25 11:53:24.429487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.238 [2024-07-25 11:53:24.429487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3947893 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3947893 /var/tmp/spdk2.sock 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3947893 /var/tmp/spdk2.sock 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3947893 /var/tmp/spdk2.sock 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3947893 ']' 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.497 11:53:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.497 [2024-07-25 11:53:24.712051] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:47.497 [2024-07-25 11:53:24.712111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947893 ] 00:06:47.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.759 [2024-07-25 11:53:24.903110] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3947824 has claimed it. 00:06:47.759 [2024-07-25 11:53:24.903196] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3947893) - No such process 00:06:48.356 ERROR: process (pid: 3947893) is no longer running 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.356 11:53:25 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3947824 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3947824 ']' 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3947824 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3947824 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3947824' 00:06:48.356 killing process with pid 3947824 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3947824 00:06:48.356 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3947824 00:06:48.615 00:06:48.615 real 0m1.600s 00:06:48.615 user 0m4.304s 00:06:48.615 sys 0m0.466s 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.615 11:53:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.615 ************************************ 00:06:48.615 END TEST locking_overlapped_coremask 00:06:48.615 ************************************ 00:06:48.615 11:53:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.615 11:53:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.615 11:53:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.615 11:53:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.615 ************************************ 00:06:48.615 START TEST locking_overlapped_coremask_via_rpc 00:06:48.615 ************************************ 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3948127 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3948127 /var/tmp/spdk.sock 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3948127 ']' 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.615 11:53:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.873 [2024-07-25 11:53:25.929817] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:48.873 [2024-07-25 11:53:25.929875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948127 ] 00:06:48.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.873 [2024-07-25 11:53:26.011338] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.873 [2024-07-25 11:53:26.011367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.873 [2024-07-25 11:53:26.100074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.873 [2024-07-25 11:53:26.100188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.873 [2024-07-25 11:53:26.100188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3948393 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3948393 /var/tmp/spdk2.sock 00:06:49.810 11:53:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3948393 ']' 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.810 11:53:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.810 [2024-07-25 11:53:26.847473] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:49.810 [2024-07-25 11:53:26.847535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948393 ] 00:06:49.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.810 [2024-07-25 11:53:27.037366] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.810 [2024-07-25 11:53:27.037423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.069 [2024-07-25 11:53:27.339683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.069 [2024-07-25 11:53:27.343658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.069 [2024-07-25 11:53:27.343663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.635 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.636 11:53:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.636 [2024-07-25 11:53:27.867804] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3948127 has claimed it. 00:06:50.636 request: 00:06:50.636 { 00:06:50.636 "method": "framework_enable_cpumask_locks", 00:06:50.636 "req_id": 1 00:06:50.636 } 00:06:50.636 Got JSON-RPC error response 00:06:50.636 response: 00:06:50.636 { 00:06:50.636 "code": -32603, 00:06:50.636 "message": "Failed to claim CPU core: 2" 00:06:50.636 } 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3948127 /var/tmp/spdk.sock 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 3948127 ']' 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.636 11:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3948393 /var/tmp/spdk2.sock 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3948393 ']' 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.894 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.152 00:06:51.152 real 0m2.439s 00:06:51.152 user 0m1.120s 00:06:51.152 sys 0m0.189s 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.152 11:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.152 ************************************ 00:06:51.152 END TEST locking_overlapped_coremask_via_rpc 00:06:51.152 ************************************ 00:06:51.153 11:53:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.153 11:53:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3948127 ]] 00:06:51.153 11:53:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3948127 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3948127 ']' 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3948127 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3948127 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3948127' 00:06:51.153 killing process with pid 3948127 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3948127 00:06:51.153 11:53:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3948127 00:06:51.720 11:53:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3948393 ]] 00:06:51.720 11:53:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3948393 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3948393 ']' 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3948393 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3948393 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3948393' 00:06:51.720 killing process with pid 3948393 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3948393 00:06:51.720 11:53:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3948393 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3948127 ]] 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3948127 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3948127 ']' 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3948127 00:06:52.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3948127) - No such process 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3948127 is not found' 00:06:52.288 Process with pid 3948127 is not found 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3948393 ]] 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3948393 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3948393 ']' 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3948393 00:06:52.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3948393) - No such process 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3948393 is not found' 00:06:52.288 Process with pid 3948393 is not found 00:06:52.288 11:53:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.288 00:06:52.288 real 0m19.528s 00:06:52.288 user 0m33.159s 00:06:52.288 sys 0m5.978s 00:06:52.288 11:53:29 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.288 
11:53:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.288 ************************************ 00:06:52.288 END TEST cpu_locks 00:06:52.288 ************************************ 00:06:52.288 00:06:52.288 real 0m47.180s 00:06:52.288 user 1m30.508s 00:06:52.288 sys 0m9.955s 00:06:52.288 11:53:29 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.288 11:53:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.288 ************************************ 00:06:52.288 END TEST event 00:06:52.288 ************************************ 00:06:52.288 11:53:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:52.288 11:53:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.288 11:53:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.288 11:53:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.288 ************************************ 00:06:52.288 START TEST thread 00:06:52.288 ************************************ 00:06:52.288 11:53:29 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:52.288 * Looking for test storage... 
00:06:52.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:52.288 11:53:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.288 11:53:29 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:52.288 11:53:29 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.288 11:53:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.288 ************************************ 00:06:52.288 START TEST thread_poller_perf 00:06:52.288 ************************************ 00:06:52.288 11:53:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.288 [2024-07-25 11:53:29.581165] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:52.288 [2024-07-25 11:53:29.581237] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949012 ] 00:06:52.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.547 [2024-07-25 11:53:29.664138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.547 [2024-07-25 11:53:29.750992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.547 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:53.922 ====================================== 00:06:53.922 busy:2210225872 (cyc) 00:06:53.922 total_run_count: 255000 00:06:53.922 tsc_hz: 2200000000 (cyc) 00:06:53.922 ====================================== 00:06:53.922 poller_cost: 8667 (cyc), 3939 (nsec) 00:06:53.922 00:06:53.922 real 0m1.280s 00:06:53.922 user 0m1.180s 00:06:53.922 sys 0m0.093s 00:06:53.922 11:53:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.922 11:53:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.922 ************************************ 00:06:53.922 END TEST thread_poller_perf 00:06:53.922 ************************************ 00:06:53.922 11:53:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.922 11:53:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:53.922 11:53:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.922 11:53:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.922 ************************************ 00:06:53.922 START TEST thread_poller_perf 00:06:53.922 ************************************ 00:06:53.922 11:53:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.922 [2024-07-25 11:53:30.929403] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:53.922 [2024-07-25 11:53:30.929471] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949293 ] 00:06:53.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.922 [2024-07-25 11:53:31.012691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.922 [2024-07-25 11:53:31.100387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.922 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:55.296 ====================================== 00:06:55.296 busy:2202364660 (cyc) 00:06:55.296 total_run_count: 3379000 00:06:55.296 tsc_hz: 2200000000 (cyc) 00:06:55.296 ====================================== 00:06:55.296 poller_cost: 651 (cyc), 295 (nsec) 00:06:55.296 00:06:55.296 real 0m1.270s 00:06:55.296 user 0m1.169s 00:06:55.296 sys 0m0.095s 00:06:55.296 11:53:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.296 11:53:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.296 ************************************ 00:06:55.296 END TEST thread_poller_perf 00:06:55.296 ************************************ 00:06:55.296 11:53:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.296 00:06:55.296 real 0m2.780s 00:06:55.296 user 0m2.454s 00:06:55.296 sys 0m0.331s 00:06:55.296 11:53:32 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.296 11:53:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.296 ************************************ 00:06:55.296 END TEST thread 00:06:55.296 ************************************ 00:06:55.296 11:53:32 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:55.296 11:53:32 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:06:55.296 11:53:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.296 11:53:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.296 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.296 ************************************ 00:06:55.296 START TEST app_cmdline 00:06:55.296 ************************************ 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:55.296 * Looking for test storage... 00:06:55.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:55.296 11:53:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.296 11:53:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3949613 00:06:55.296 11:53:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3949613 00:06:55.296 11:53:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3949613 ']' 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.296 11:53:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.296 [2024-07-25 11:53:32.434378] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:55.296 [2024-07-25 11:53:32.434444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949613 ] 00:06:55.296 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.296 [2024-07-25 11:53:32.515456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.554 [2024-07-25 11:53:32.607325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.121 11:53:33 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.121 11:53:33 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:56.121 11:53:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:56.379 { 00:06:56.379 "version": "SPDK v24.09-pre git sha1 86fd5638b", 00:06:56.379 "fields": { 00:06:56.379 "major": 24, 00:06:56.379 "minor": 9, 00:06:56.379 "patch": 0, 00:06:56.379 "suffix": "-pre", 00:06:56.379 "commit": "86fd5638b" 00:06:56.379 } 00:06:56.379 } 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.379 11:53:33 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.379 11:53:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:56.379 11:53:33 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.639 request: 00:06:56.639 { 00:06:56.639 "method": "env_dpdk_get_mem_stats", 00:06:56.639 "req_id": 1 
00:06:56.639 } 00:06:56.639 Got JSON-RPC error response 00:06:56.639 response: 00:06:56.639 { 00:06:56.639 "code": -32601, 00:06:56.639 "message": "Method not found" 00:06:56.639 } 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.639 11:53:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3949613 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3949613 ']' 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3949613 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.639 11:53:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3949613 00:06:56.898 11:53:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.898 11:53:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.898 11:53:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3949613' 00:06:56.898 killing process with pid 3949613 00:06:56.898 11:53:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 3949613 00:06:56.898 11:53:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 3949613 00:06:57.157 00:06:57.157 real 0m2.025s 00:06:57.157 user 0m2.594s 00:06:57.157 sys 0m0.500s 00:06:57.157 11:53:34 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.157 11:53:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 ************************************ 00:06:57.157 END TEST app_cmdline 00:06:57.157 ************************************ 00:06:57.157 11:53:34 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:57.157 11:53:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.157 11:53:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.157 11:53:34 -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 ************************************ 00:06:57.157 START TEST version 00:06:57.157 ************************************ 00:06:57.157 11:53:34 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:57.417 * Looking for test storage... 00:06:57.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:57.417 11:53:34 version -- app/version.sh@17 -- # get_header_version major 00:06:57.417 11:53:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # cut -f2 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.417 11:53:34 version -- app/version.sh@17 -- # major=24 00:06:57.417 11:53:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.417 11:53:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # cut -f2 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.417 11:53:34 version -- app/version.sh@18 -- # minor=9 00:06:57.417 11:53:34 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.417 11:53:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # cut -f2 00:06:57.417 11:53:34 
version -- app/version.sh@14 -- # tr -d '"' 00:06:57.417 11:53:34 version -- app/version.sh@19 -- # patch=0 00:06:57.417 11:53:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.417 11:53:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # cut -f2 00:06:57.417 11:53:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.417 11:53:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.417 11:53:34 version -- app/version.sh@22 -- # version=24.9 00:06:57.417 11:53:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.417 11:53:34 version -- app/version.sh@28 -- # version=24.9rc0 00:06:57.417 11:53:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.417 11:53:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.417 11:53:34 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:57.417 11:53:34 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:57.417 00:06:57.417 real 0m0.167s 00:06:57.417 user 0m0.094s 00:06:57.417 sys 0m0.111s 00:06:57.417 11:53:34 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.417 11:53:34 version -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 ************************************ 00:06:57.417 END TEST version 00:06:57.418 ************************************ 00:06:57.418 11:53:34 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@201 -- # [[ 0 -eq 1 ]] 00:06:57.418 11:53:34 -- spdk/autotest.sh@207 -- # uname -s 
00:06:57.418 11:53:34 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:06:57.418 11:53:34 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:57.418 11:53:34 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:57.418 11:53:34 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@269 -- # timing_exit lib 00:06:57.418 11:53:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:57.418 11:53:34 -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 11:53:34 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@285 -- # '[' 1 -eq 1 ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@286 -- # export NET_TYPE 00:06:57.418 11:53:34 -- spdk/autotest.sh@289 -- # '[' tcp = rdma ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@292 -- # '[' tcp = tcp ']' 00:06:57.418 11:53:34 -- spdk/autotest.sh@293 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:57.418 11:53:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.418 11:53:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.418 11:53:34 -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 ************************************ 00:06:57.418 START TEST nvmf_tcp 00:06:57.418 ************************************ 00:06:57.418 11:53:34 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:57.678 * Looking for test storage... 00:06:57.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:57.678 11:53:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:57.678 11:53:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:57.678 11:53:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:57.678 11:53:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.678 11:53:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.678 11:53:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.678 ************************************ 00:06:57.678 START TEST nvmf_target_core 00:06:57.678 ************************************ 00:06:57.678 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:57.678 * Looking for test storage... 00:06:57.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:57.678 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:57.678 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:57.678 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.678 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:57.678 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.679 11:53:34 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.679 ************************************ 00:06:57.679 START TEST nvmf_abort 00:06:57.679 ************************************ 00:06:57.679 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:57.938 * Looking for test storage... 
00:06:57.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.938 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.939 11:53:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.939 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.514 11:53:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:04.514 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:04.514 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:04.514 11:53:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:04.514 Found net devices under 0000:af:00.0: cvl_0_0 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.514 11:53:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:04.514 Found net devices under 0000:af:00.1: cvl_0_1 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.514 11:53:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:04.514 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:04.515 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.515 11:53:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:04.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:07:04.515 00:07:04.515 --- 10.0.0.2 ping statistics --- 00:07:04.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.515 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:07:04.515 00:07:04.515 --- 10.0.0.1 ping statistics --- 00:07:04.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.515 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:04.515 11:53:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3953468 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3953468 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3953468 ']' 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.515 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.515 [2024-07-25 11:53:41.180796] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:04.515 [2024-07-25 11:53:41.180852] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.515 [2024-07-25 11:53:41.268704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.515 [2024-07-25 11:53:41.372614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.515 [2024-07-25 11:53:41.372667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.515 [2024-07-25 11:53:41.372681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.515 [2024-07-25 11:53:41.372692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.515 [2024-07-25 11:53:41.372702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:04.515 [2024-07-25 11:53:41.372827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.515 [2024-07-25 11:53:41.372922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.515 [2024-07-25 11:53:41.372926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 [2024-07-25 11:53:42.179344] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 Malloc0 00:07:05.083 11:53:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 Delay0 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 [2024-07-25 11:53:42.256768] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.083 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:05.083 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.084 [2024-07-25 11:53:42.379204] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:07.618 Initializing NVMe Controllers 00:07:07.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:07.618 controller IO queue size 128 less than required 00:07:07.618 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:07.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:07.618 Initialization complete. Launching workers. 
00:07:07.618 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29642 00:07:07.618 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29703, failed to submit 62 00:07:07.618 success 29646, unsuccess 57, failed 0 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:07.618 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.619 rmmod nvme_tcp 00:07:07.619 rmmod nvme_fabrics 00:07:07.619 rmmod nvme_keyring 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:07.619 11:53:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3953468 ']' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3953468 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3953468 ']' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3953468 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3953468 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3953468' 00:07:07.619 killing process with pid 3953468 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3953468 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3953468 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.619 11:53:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.619 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.157 00:07:10.157 real 0m11.971s 00:07:10.157 user 0m13.874s 00:07:10.157 sys 0m5.502s 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.157 ************************************ 00:07:10.157 END TEST nvmf_abort 00:07:10.157 ************************************ 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.157 ************************************ 00:07:10.157 START TEST nvmf_ns_hotplug_stress 00:07:10.157 ************************************ 00:07:10.157 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:10.157 * Looking for test storage... 
00:07:10.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:10.157 11:53:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.157 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.434 11:53:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.434 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:15.435 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:15.435 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.435 11:53:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:15.435 Found net devices under 0000:af:00.0: cvl_0_0 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:15.435 Found net devices 
under 0000:af:00.1: cvl_0_1 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.435 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:07:15.694 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.694 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.694 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.694 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:15.695 00:07:15.695 --- 10.0.0.2 ping statistics --- 00:07:15.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.695 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:15.695 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:07:15.954 00:07:15.954 --- 10.0.0.1 ping statistics --- 00:07:15.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.954 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3957756 00:07:15.954 11:53:53 
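The netns setup traced above can be condensed into a short sketch. This is an editorial aside, not part of the log: the harness splits the two ice ports so that `cvl_0_0` (the target side, 10.0.0.2) lives inside namespace `cvl_0_0_ns_spdk` while `cvl_0_1` (the initiator side, 10.0.0.1) stays in the root namespace, opens TCP port 4420 in iptables, and verifies connectivity with ping in both directions. The interface names and addresses are taken from the log; the `run` wrapper below is a dry-run stand-in (it prints instead of executing), since the real commands need root and the physical NICs.

```shell
#!/usr/bin/env bash
# Sketch of the namespace split performed by nvmf_tcp_init above.
# Dry-run: `run` only echoes each command. Swap its body for `"$@"`
# to execute for real (requires root and the named interfaces).
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace; target IP 10.0.0.2
INITIATOR_IF=cvl_0_1   # stays in the root namespace; initiator IP 10.0.0.1

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

With this split in place, the target application is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the `nvmf_tgt` invocation later in the log carries that prefix.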
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3957756 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3957756 ']' 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.954 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 [2024-07-25 11:53:53.103768] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:15.954 [2024-07-25 11:53:53.103826] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.954 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.954 [2024-07-25 11:53:53.191316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.213 [2024-07-25 11:53:53.292757] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:16.213 [2024-07-25 11:53:53.292807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.213 [2024-07-25 11:53:53.292821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.213 [2024-07-25 11:53:53.292832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.213 [2024-07-25 11:53:53.292841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.213 [2024-07-25 11:53:53.292966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.213 [2024-07-25 11:53:53.293059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.213 [2024-07-25 11:53:53.293061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.784 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.784 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:16.784 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.784 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.784 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.044 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.044 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:17.044 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:07:17.044 [2024-07-25 11:53:54.323432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.304 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:17.563 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.821 [2024-07-25 11:53:54.866506] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.822 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.081 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:18.340 Malloc0 00:07:18.340 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.599 Delay0 00:07:18.599 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.858 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:19.117 NULL1 00:07:19.117 11:53:56 
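The RPC calls scattered through the trace above form one bring-up sequence, summarized here as an aside. The NQN, serial number, and bdev names are taken verbatim from the log; the option meanings in the comments follow the usual SPDK `rpc.py` conventions but are stated as reading aids, not authoritative documentation. `rpc` is a dry-run wrapper so the sketch runs without a live target; against a real target it would be `scripts/rpc.py`.

```shell
#!/usr/bin/env bash
# Sketch of the target bring-up sequence from ns_hotplug_stress.sh.
# Dry-run: `rpc` echoes instead of calling scripts/rpc.py.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0   # 32 MiB malloc bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev over Malloc0
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512        # null bdev, later resized by the test
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

The `Delay0`/`NULL1` pair is the point of the test: `Delay0` keeps I/O in flight long enough for namespace removal to race against it, while `NULL1` is the bdev the loop repeatedly resizes.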
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:19.376 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3958315 00:07:19.376 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:19.376 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:19.376 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.376 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.754 Read completed with error (sct=0, sc=11) 00:07:20.754 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.754 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:20.754 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 
00:07:21.013 true 00:07:21.013 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:21.013 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.951 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.951 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:21.951 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:22.210 true 00:07:22.210 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:22.210 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.469 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.726 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:22.727 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:22.985 true 00:07:22.985 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3958315 00:07:22.985 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.242 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.500 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:23.500 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:23.759 true 00:07:23.759 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:23.759 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.135 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.135 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:25.135 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:25.396 true 00:07:25.396 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:25.396 11:54:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.655 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.914 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:25.914 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:26.173 true 00:07:26.173 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:26.173 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.144 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.403 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:27.403 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:27.662 true 00:07:27.662 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3958315 00:07:27.662 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.920 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.178 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:28.178 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:28.437 true 00:07:28.437 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:28.437 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.373 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.632 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:29.632 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:29.891 true 00:07:29.891 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:29.891 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.150 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.408 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:30.408 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:30.667 true 00:07:30.667 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:30.667 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.925 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.184 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:31.184 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:31.442 true 00:07:31.442 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:31.442 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.375 
11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.632 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:32.632 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:32.890 true 00:07:32.890 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:32.890 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.148 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.406 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:33.406 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:33.664 true 00:07:33.664 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:33.664 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.598 11:54:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.598 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:34.598 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:34.860 true 00:07:34.860 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:34.860 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.118 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.375 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:35.375 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:35.633 true 00:07:35.633 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:35.633 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.575 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:36.575 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.843 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:36.844 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:37.102 true 00:07:37.102 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:37.102 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.361 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.619 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:37.619 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:37.878 true 00:07:37.878 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:37.878 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.815 11:54:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.074 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:39.074 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:39.332 true 00:07:39.332 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:39.332 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.590 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.848 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.848 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:40.106 true 00:07:40.106 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:40.106 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.043 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.302 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:41.302 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:41.302 true 00:07:41.561 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:41.561 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.820 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:42.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:42.079 true 00:07:42.338 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:42.338 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.274 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.274 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:43.274 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:43.532 true 00:07:43.532 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:43.532 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.791 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.049 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:44.049 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:44.308 true 00:07:44.308 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:44.308 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.567 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.826 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:44.826 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:45.084 true 00:07:45.084 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:45.084 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.460 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.460 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:46.460 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:46.720 true 00:07:46.721 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:46.721 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.979 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.237 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 
00:07:47.237 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.495 true 00:07:47.495 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:47.495 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.430 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.430 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:48.430 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.689 true 00:07:48.689 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:48.689 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.947 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.207 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:49.207 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:49.466 true 00:07:49.466 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315 00:07:49.466 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.404 Initializing NVMe Controllers 00:07:50.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.404 Controller IO queue size 128, less than required. 00:07:50.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.404 Controller IO queue size 128, less than required. 00:07:50.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:50.404 Initialization complete. Launching workers. 
00:07:50.404 ========================================================
00:07:50.404 Latency(us)
00:07:50.404 Device Information : IOPS MiB/s Average min max
00:07:50.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 592.53 0.29 114970.67 3470.34 1022468.03
00:07:50.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4221.93 2.06 30320.90 9362.25 574871.75
00:07:50.404 ========================================================
00:07:50.404 Total : 4814.47 2.35 40739.05 3470.34 1022468.03
00:07:50.404
00:07:50.662 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.920 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:50.920 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:50.920 true
00:07:51.179 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3958315
00:07:51.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3958315) - No such process
00:07:51.179 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3958315
00:07:51.179 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.437 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:51.437
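[Editor's note] The repeated sh@44-50 entries above are iterations of one hotplug loop in test/nvmf/target/ns_hotplug_stress.sh: while the I/O workload process (PID 3958315 in this run) is still alive per `kill -0`, namespace 1 is hot-removed and re-added, and the backing NULL1 bdev is grown by one unit per pass until the workload exits. The following is a runnable sketch of that loop, not the script itself; `rpc` here is a hypothetical echo shim standing in for scripts/rpc.py, and the three-pass bound replaces the real `kill -0`-driven termination so the sketch can execute anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the add/remove/resize loop seen in the log (ns_hotplug_stress.sh@44-50).
# rpc is a stub standing in for spdk/scripts/rpc.py; names (NULL1, Delay0, the NQN)
# are taken from the log above.
rpc() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
null_size=1009
perf_pid=$$            # stand-in for the I/O workload PID (3958315 in the log)
for _ in 1 2 3; do     # the real loop runs until kill -0 "$perf_pid" fails
    kill -0 "$perf_pid" || break
    rpc nvmf_subsystem_remove_ns "$nqn" 1        # hot-remove namespace 1
    rpc nvmf_subsystem_add_ns "$nqn" Delay0      # hot-add it back
    ((++null_size))                              # grow the null bdev each pass
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

Once the workload exits, `kill -0` fails ("No such process" in the log), the script waits on the dead PID, and both namespaces are removed before the next phase starts.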
11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:51.437 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:51.437 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:51.437 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.437 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:51.697 null0 00:07:51.697 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.697 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.697 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:51.956 null1 00:07:51.956 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.956 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.956 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:52.214 null2 00:07:52.214 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.214 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.214 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:52.473 null3 00:07:52.473 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.473 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.473 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:52.731 null4 00:07:52.731 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.731 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.731 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:52.988 null5 00:07:52.988 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.988 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.988 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:53.246 null6 00:07:53.246 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.246 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.246 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:53.505 null7 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.505 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.506 
11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3965218 3965219 3965221 3965223 3965225 3965227 3965229 3965231 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.506 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.764 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.764 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.765 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.765 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.765 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.765 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.765 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.765 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
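[Editor's note] The nthreads=8 section above creates null0..null7 (`bdev_null_create nullN 100 4096`) and then launches one backgrounded add_remove worker per bdev, each cycling `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns` ten times (the `(( i < 10 ))` checks) against its own namespace ID while the parent collects worker PIDs and waits on them. A minimal runnable sketch of that phase, with rpc.py again stubbed by a hypothetical echo shim:

```shell
#!/usr/bin/env bash
# Sketch of the parallel add/remove phase (ns_hotplug_stress.sh@14-18 and @58-66).
# rpc is a stub for spdk/scripts/rpc.py; arguments mirror the log above.
rpc() { echo "rpc.py $*"; }
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                     # one worker: ten add/remove cycles on one NSID
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # size and block size as invoked in the log
done
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &       # one worker per namespace, run in parallel
    pids+=($!)
done
wait "${pids[@]}"
echo "workers done: ${#pids[@]}"
```

Running the workers concurrently is the point of the phase: the interleaved add/remove RPCs in the log (NSIDs 1-8 in arbitrary order) are the eight workers racing against the target's namespace attach/detach path.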
00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.023 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.281 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.540 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.799 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.799 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.799 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.799 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.799 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.799 11:54:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.799 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.799 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.799 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.799 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.059 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.318 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.577 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.836 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.836 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.836 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.836 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.150 11:54:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.150 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.414 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.673 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.933 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:07:56.933 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.933 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.933 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.193 11:54:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.193 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.453 11:54:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.453 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.713 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.714 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.714 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.974 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.233 11:54:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.233 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.492 11:54:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.492 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.751 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.752 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.010 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.270 rmmod nvme_tcp 00:07:59.270 rmmod nvme_fabrics 00:07:59.270 rmmod nvme_keyring 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3957756 ']' 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3957756 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3957756 ']' 
00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3957756 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3957756 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3957756' 00:07:59.270 killing process with pid 3957756 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3957756 00:07:59.270 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3957756 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.530 11:54:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.530 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.067 00:08:02.067 real 0m51.844s 00:08:02.067 user 3m38.820s 00:08:02.067 sys 0m16.576s 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:02.067 ************************************ 00:08:02.067 END TEST nvmf_ns_hotplug_stress 00:08:02.067 ************************************ 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.067 ************************************ 00:08:02.067 START TEST nvmf_delete_subsystem 00:08:02.067 ************************************ 00:08:02.067 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:02.067 * Looking for test storage... 
00:08:02.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.067 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.657 11:54:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.657 11:54:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:08.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.657 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:08.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.658 11:54:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:08.658 Found net devices under 0000:af:00.0: cvl_0_0 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:08.658 Found net devices under 0000:af:00.1: cvl_0_1 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.658 
11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.658 11:54:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:08.658 00:08:08.658 --- 10.0.0.2 ping statistics --- 00:08:08.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.658 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:08.658 00:08:08.658 --- 10.0.0.1 ping statistics --- 00:08:08.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.658 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3970132 00:08:08.658 11:54:45 
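For readers following the trace above, the network bring-up that `nvmftestinit` performs before the ping checks can be summarized as a short sketch. This is a dry-run reconstruction from the commands visible in this log (interface names `cvl_0_0`/`cvl_0_1`, namespace `cvl_0_0_ns_spdk`, and the `10.0.0.x/24` addresses are taken from the trace; the `run` wrapper is our own addition so the sketch prints instead of executing, since the real commands need root and the test hardware):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmftestinit above.
# run() only echoes each command; drop the echo to execute for real (needs root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # target-side port name, from this log
INITIATOR_IF=cvl_0_1     # initiator-side port name, from this log
NS=cvl_0_0_ns_spdk       # network namespace name, from this log

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
# Move the target-side port into its own namespace, then address both ends.
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open NVMe/TCP's default port so the initiator can reach the target.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity checks, matching the two pings in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The successful pings in both directions (0% packet loss above) are what let `nvmftestinit` return 0 and the test proceed to starting `nvmf_tgt` inside the namespace.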
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3970132 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3970132 ']' 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.658 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.658 [2024-07-25 11:54:45.168013] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:08:08.658 [2024-07-25 11:54:45.168078] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.658 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.658 [2024-07-25 11:54:45.254358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.658 [2024-07-25 11:54:45.343057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:08.658 [2024-07-25 11:54:45.343098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.658 [2024-07-25 11:54:45.343107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.658 [2024-07-25 11:54:45.343116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.658 [2024-07-25 11:54:45.343124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.658 [2024-07-25 11:54:45.346629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.658 [2024-07-25 11:54:45.346635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.917 [2024-07-25 11:54:46.160996] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.917 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.917 [2024-07-25 11:54:46.181524] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.918 NULL1 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.918 11:54:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.918 Delay0 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3970204 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:08.918 11:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:09.177 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.177 [2024-07-25 11:54:46.303954] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
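The target-side configuration that `delete_subsystem.sh` drives through `rpc_cmd` in the trace above can be condensed into the following sketch. The RPC names, NQN, bdev names, and delay parameters are taken from this log; the `rpc` echo wrapper is our own stand-in (a real run would invoke SPDK's `scripts/rpc.py` against the running `nvmf_tgt`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence traced above (delete_subsystem.sh).
# rpc() echoes the call here; point it at scripts/rpc.py to run for real.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1   # subsystem NQN, from this log

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
# Wrap NULL1 in a delay bdev so I/O is still in flight when the
# subsystem is deleted (latencies from the log, in microseconds).
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 \
    -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf then generates load, and the subsystem is deleted
# mid-I/O; the aborted commands produce the error lines that follow.
rpc nvmf_delete_subsystem "$NQN"
```

Deleting the subsystem while `spdk_nvme_perf` still has queue depth outstanding is the point of the test: the long run of "Read/Write completed with error (sct=0, sc=8)" lines below is the expected abort storm, not a failure of the test itself.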
00:08:11.119 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.119 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.119 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error 
(sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 starting I/O failed: -6 00:08:11.378 starting I/O failed: -6 00:08:11.378 starting I/O failed: -6 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.378 Read completed with error (sct=0, sc=8) 00:08:11.378 starting I/O failed: -6 00:08:11.378 starting I/O failed: -6 00:08:11.378 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error 
(sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 starting I/O failed: -6 00:08:11.379 [2024-07-25 11:54:48.577257] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65cc000c00 is same with the state(5) to be set 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 00:08:11.379 Write completed with error (sct=0, sc=8) 00:08:11.379 Read completed with error (sct=0, sc=8) 
00:08:11.379 Read completed with error (sct=0, sc=8) [repeated Read/Write completed with error (sct=0, sc=8) completion lines omitted] 00:08:11.379 [2024-07-25 11:54:48.579224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65cc00d000 is same with the state(5) to be set 00:08:11.379 [repeated Read/Write completed with error (sct=0, sc=8) completion lines omitted] 00:08:11.379 [2024-07-25 11:54:48.579773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65cc00d660 is same with the state(5) to be set 00:08:12.316 [2024-07-25 11:54:49.524815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a49500 is same with the state(5) to be set 00:08:12.316 [repeated Read/Write completed with error (sct=0, sc=8) completion lines omitted] 00:08:12.316 [2024-07-25 11:54:49.581302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6acb0 is same with the state(5) to be set 00:08:12.316 [repeated Read/Write completed with error (sct=0, sc=8) completion lines omitted] 00:08:12.316 [2024-07-25 11:54:49.582894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d650 is same with the state(5) to be set 00:08:12.316 [repeated Read/Write completed with error (sct=0, sc=8) completion lines omitted] 00:08:12.316 [2024-07-25 11:54:49.583113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65cc00d330 is same with the state(5) to be set 00:08:12.316 [repeated Read/Write completed with error (sct=0, sc=8) completion lines omitted] 00:08:12.316 [2024-07-25 11:54:49.583572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69d00 is same with the state(5) to be set 00:08:12.316 Initializing NVMe Controllers 00:08:12.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.316 Controller IO queue size 128, less than required. 00:08:12.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:12.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:12.316 Initialization complete. Launching workers. 00:08:12.316 ======================================================== 00:08:12.316 Latency(us) 00:08:12.316 Device Information : IOPS MiB/s Average min max 00:08:12.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.99 0.09 961257.04 1268.66 1019583.80 00:08:12.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 144.47 0.07 925019.05 2039.76 1021178.01 00:08:12.316 ======================================================== 00:08:12.316 Total : 334.46 0.16 945603.94 1268.66 1021178.01 00:08:12.316 00:08:12.316 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.316 [2024-07-25 11:54:49.584759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a49500 (9): Bad file descriptor 00:08:12.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:12.316 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@34 -- # delay=0 00:08:12.316 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3970204 00:08:12.316 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3970204 00:08:12.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3970204) - No such process 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3970204 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3970204 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3970204 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.884 11:54:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.884 [2024-07-25 11:54:50.113637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3970976 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:12.884 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.884 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.143 [2024-07-25 11:54:50.213407] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:13.402 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.402 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:13.402 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.969 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.969 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:13.969 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.537 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.537 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:14.537 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.105 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.105 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:15.105 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.364 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.364 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:15.364 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.932 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.932 11:54:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:15.932 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.190 Initializing NVMe Controllers 00:08:16.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.190 Controller IO queue size 128, less than required. 00:08:16.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:16.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:16.190 Initialization complete. Launching workers. 00:08:16.190 ======================================================== 00:08:16.190 Latency(us) 00:08:16.190 Device Information : IOPS MiB/s Average min max 00:08:16.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005160.32 1000258.80 1019690.20 00:08:16.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006718.22 1000373.81 1019560.86 00:08:16.190 ======================================================== 00:08:16.190 Total : 256.00 0.12 1005939.27 1000258.80 1019690.20 00:08:16.190 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3970976 00:08:16.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3970976) - No such process 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3970976 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.449 rmmod nvme_tcp 00:08:16.449 rmmod nvme_fabrics 00:08:16.449 rmmod nvme_keyring 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3970132 ']' 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3970132 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3970132 ']' 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3970132 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:16.449 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.449 11:54:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3970132 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3970132' 00:08:16.708 killing process with pid 3970132 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3970132 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3970132 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.708 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.247 00:08:19.247 real 0m17.147s 00:08:19.247 
user 0m31.376s 00:08:19.247 sys 0m5.548s 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.247 ************************************ 00:08:19.247 END TEST nvmf_delete_subsystem 00:08:19.247 ************************************ 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.247 ************************************ 00:08:19.247 START TEST nvmf_host_management 00:08:19.247 ************************************ 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:19.247 * Looking for test storage... 
00:08:19.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.247 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # 
nvmftestinit 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.248 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.861 11:55:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:25.861 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.861 
11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:25.861 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.861 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.862 
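The device discovery traced above buckets each PCI `vendor:device` pair into the NIC families the suite distinguishes (Intel e810/x722 vs. Mellanox mlx5-class) before scanning for net devices. A minimal standalone sketch of that bucketing, with the ID table copied from this trace; `classify_nic` is a hypothetical helper name, and the Mellanox wildcard is a simplification of the script's explicit ID list:

```shell
#!/usr/bin/env bash
# Bucket a "vendor:device" PCI ID into the NIC families the test suite
# distinguishes. IDs copied from the trace above (nvmf/common.sh@301-318);
# the 0x15b3 wildcard stands in for the explicit Mellanox ID list
# (0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013).
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the ID reported for 0000:af:00.0 in this log
classify_nic 0x15b3:0x1017
```

The two `0x159b` ports found here land in the e810 bucket, which is why the trace then takes the `[[ e810 == e810 ]]` branch.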
11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:25.862 Found net devices under 0000:af:00.0: cvl_0_0 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:25.862 Found net devices under 0000:af:00.1: cvl_0_1 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.862 
11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.862 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:08:25.862 00:08:25.862 --- 10.0.0.2 ping statistics --- 00:08:25.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.862 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:08:25.862 00:08:25.862 --- 10.0.0.1 ping statistics --- 00:08:25.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.862 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.862 11:55:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3975239 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3975239 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3975239 ']' 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.862 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.862 [2024-07-25 11:55:02.295109] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:08:25.862 [2024-07-25 11:55:02.295169] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.862 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.862 [2024-07-25 11:55:02.384795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.862 [2024-07-25 11:55:02.491749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.862 [2024-07-25 11:55:02.491795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.862 [2024-07-25 11:55:02.491808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.862 [2024-07-25 11:55:02.491819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.862 [2024-07-25 11:55:02.491828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
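The interface plumbing traced earlier (nvmf_tcp_init, nvmf/common.sh@229-268) amounts to moving the target-side port into its own network namespace so initiator and target can talk over real NICs on one host. A dry-run sketch of that topology, with interface names, namespace, and addresses taken from this log; `run` is a hypothetical wrapper that only prints each command, since the real steps need root and the physical `cvl_*` netdevs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test topology: target interface cvl_0_0
# moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24; initiator
# interface cvl_0_1 stays in the default namespace with 10.0.0.1/24.
# "run" echoes instead of executing, so this is safe without root.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator
```

This is also why the target app is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` in the trace: the listener must live in the namespace that owns `cvl_0_0`.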
00:08:25.862 [2024-07-25 11:55:02.491949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.862 [2024-07-25 11:55:02.492061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.862 [2024-07-25 11:55:02.492101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.862 [2024-07-25 11:55:02.492100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.122 [2024-07-25 11:55:03.292526] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:26.122 11:55:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.122 Malloc0 00:08:26.122 [2024-07-25 11:55:03.368017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3975542 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3975542 /var/tmp/bdevperf.sock 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3975542 ']' 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.122 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:26.123 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:26.123 { 00:08:26.123 "params": { 00:08:26.123 "name": "Nvme$subsystem", 00:08:26.123 "trtype": "$TEST_TRANSPORT", 00:08:26.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.123 "adrfam": "ipv4", 00:08:26.123 "trsvcid": "$NVMF_PORT", 00:08:26.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.123 "hdgst": ${hdgst:-false}, 
00:08:26.123 "ddgst": ${ddgst:-false} 00:08:26.123 }, 00:08:26.123 "method": "bdev_nvme_attach_controller" 00:08:26.123 } 00:08:26.123 EOF 00:08:26.123 )") 00:08:26.382 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:26.382 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:26.382 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:26.382 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:26.382 "params": { 00:08:26.382 "name": "Nvme0", 00:08:26.382 "trtype": "tcp", 00:08:26.382 "traddr": "10.0.0.2", 00:08:26.382 "adrfam": "ipv4", 00:08:26.382 "trsvcid": "4420", 00:08:26.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:26.382 "hdgst": false, 00:08:26.382 "ddgst": false 00:08:26.382 }, 00:08:26.382 "method": "bdev_nvme_attach_controller" 00:08:26.382 }' 00:08:26.382 [2024-07-25 11:55:03.468005] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:08:26.382 [2024-07-25 11:55:03.468071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975542 ] 00:08:26.382 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.382 [2024-07-25 11:55:03.552897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.382 [2024-07-25 11:55:03.641151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.641 Running I/O for 10 seconds... 
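The `waitforio` trace that follows polls the bdev's read counter until it crosses a threshold (the script requires at least 100 read ops and sees 429 here). A minimal sketch of that retry loop; `wait_for_io` and `fake_iostat` are hypothetical names, and the stub stands in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
#!/usr/bin/env bash
# Sketch of the read-I/O polling loop: retry up to $2 times until the
# bdev reports at least $1 read ops. fake_iostat is a stub returning
# 429, the read_io_count observed in this log.
fake_iostat() { echo 429; }

wait_for_io() {
    local threshold=$1 tries=$2 count ret=1
    while (( tries-- > 0 )); do
        count=$(fake_iostat)
        if (( count >= threshold )); then ret=0; break; fi
        sleep 0.1
    done
    return $ret
}

wait_for_io 100 10 && echo "saw enough read I/O"
```

On success the real script breaks out and proceeds to remove the host from the subsystem while I/O is in flight, which is what triggers the qpair state-change errors below.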
00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=429 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 429 -ge 100 ']' 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.210 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.210 [2024-07-25 11:55:04.412278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc500 is same with the state(5) to be set 00:08:27.210 [2024-07-25 11:55:04.412371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc500 is same with the state(5) to be set 00:08:27.210 [2024-07-25 11:55:04.412404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc500 is 
same with the state(5) to be set 00:08:27.211 [2024-07-25 11:55:04.415156] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.211 [2024-07-25 11:55:04.415197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.211 [2024-07-25 11:55:04.415211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.211 [2024-07-25 11:55:04.415221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.211 [2024-07-25 11:55:04.415232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.211 [2024-07-25 11:55:04.415241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.211 [2024-07-25 11:55:04.415251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.211 [2024-07-25 11:55:04.415260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.211 [2024-07-25 11:55:04.415270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868e90 is same with the state(5) to be set 00:08:27.211 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.211 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:27.211 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.211 11:55:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.211 [2024-07-25 11:55:04.426186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868e90 (9): Bad file descriptor 00:08:27.211 [2024-07-25 11:55:04.426278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:27.211 [2024-07-25 11:55:04.426293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.211 [... 63 further command/completion pairs elided: READ sqid:1 cid:45-63 (lba:63104-65408) and WRITE sqid:1 cid:0-43 (lba:65536-71040), each completed ABORTED - SQ DELETION (00/08), timestamps 11:55:04.426311 through 11:55:04.427672 ...] 00:08:27.213 [2024-07-25 11:55:04.427748] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c79f80 was disconnected and freed. reset controller. 
00:08:27.213 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.213 [2024-07-25 11:55:04.429097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:27.213 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:27.213 task offset: 62976 on job bdev=Nvme0n1 fails 00:08:27.213 00:08:27.213 Latency(us) 00:08:27.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.213 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:27.213 Job: Nvme0n1 ended in about 0.50 seconds with error 00:08:27.213 Verification LBA range: start 0x0 length 0x400 00:08:27.213 Nvme0n1 : 0.50 975.14 60.95 126.85 0.00 56370.43 1966.08 52905.43 00:08:27.213 =================================================================================================================== 00:08:27.213 Total : 975.14 60.95 126.85 0.00 56370.43 1966.08 52905.43 00:08:27.213 [2024-07-25 11:55:04.431400] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.472 [2024-07-25 11:55:04.575892] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3975542 00:08:28.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3975542) - No such process 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.410 { 00:08:28.410 "params": { 00:08:28.410 "name": "Nvme$subsystem", 00:08:28.410 "trtype": "$TEST_TRANSPORT", 00:08:28.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.410 "adrfam": "ipv4", 00:08:28.410 "trsvcid": "$NVMF_PORT", 00:08:28.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.410 "hdgst": ${hdgst:-false}, 00:08:28.410 "ddgst": ${ddgst:-false} 00:08:28.410 }, 00:08:28.410 "method": "bdev_nvme_attach_controller" 00:08:28.410 } 00:08:28.410 EOF 00:08:28.410 )") 00:08:28.410 
11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:28.410 11:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.410 "params": { 00:08:28.410 "name": "Nvme0", 00:08:28.410 "trtype": "tcp", 00:08:28.411 "traddr": "10.0.0.2", 00:08:28.411 "adrfam": "ipv4", 00:08:28.411 "trsvcid": "4420", 00:08:28.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:28.411 "hdgst": false, 00:08:28.411 "ddgst": false 00:08:28.411 }, 00:08:28.411 "method": "bdev_nvme_attach_controller" 00:08:28.411 }' 00:08:28.411 [2024-07-25 11:55:05.488082] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:08:28.411 [2024-07-25 11:55:05.488142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975958 ] 00:08:28.411 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.411 [2024-07-25 11:55:05.568775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.411 [2024-07-25 11:55:05.656046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.978 Running I/O for 1 seconds... 
00:08:29.916 00:08:29.916 Latency(us) 00:08:29.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:29.916 Verification LBA range: start 0x0 length 0x400 00:08:29.916 Nvme0n1 : 1.03 1056.11 66.01 0.00 0.00 59462.89 11856.06 53143.74 00:08:29.916 =================================================================================================================== 00:08:29.916 Total : 1056.11 66.01 0.00 0.00 59462.89 11856.06 53143.74 00:08:29.916 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:29.916 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.175 rmmod nvme_tcp 
00:08:30.175 rmmod nvme_fabrics 00:08:30.175 rmmod nvme_keyring 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3975239 ']' 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3975239 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3975239 ']' 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3975239 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3975239 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:30.175 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:30.176 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3975239' 00:08:30.176 killing process with pid 3975239 00:08:30.176 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3975239 00:08:30.176 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3975239 00:08:30.435 [2024-07-25 11:55:07.613590] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.435 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:32.994 00:08:32.994 real 0m13.591s 00:08:32.994 user 0m25.056s 00:08:32.994 sys 0m5.707s 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.994 ************************************ 00:08:32.994 END TEST nvmf_host_management 00:08:32.994 ************************************ 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.994 ************************************ 00:08:32.994 START TEST nvmf_lvol 00:08:32.994 ************************************ 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.994 * Looking for test storage... 00:08:32.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.994 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.995 11:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:38.272 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:38.273 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:38.273 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.273 11:55:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:38.273 Found net devices under 0000:af:00.0: cvl_0_0 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:38.273 Found net devices under 0000:af:00.1: cvl_0_1 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.273 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:08:38.533 00:08:38.533 --- 10.0.0.2 ping statistics --- 00:08:38.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.533 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:08:38.533 00:08:38.533 --- 10.0.0.1 ping statistics --- 00:08:38.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.533 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.533 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3979986 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3979986 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3979986 ']' 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.792 11:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.792 [2024-07-25 11:55:15.904746] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:08:38.792 [2024-07-25 11:55:15.904811] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.792 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.792 [2024-07-25 11:55:15.995419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.792 [2024-07-25 11:55:16.086945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.792 [2024-07-25 11:55:16.086986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.792 [2024-07-25 11:55:16.086996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.792 [2024-07-25 11:55:16.087005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.792 [2024-07-25 11:55:16.087012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:38.792 [2024-07-25 11:55:16.087064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.792 [2024-07-25 11:55:16.087177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.792 [2024-07-25 11:55:16.087178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.764 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:39.764 [2024-07-25 11:55:17.038995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.024 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.284 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:40.284 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.542 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:40.542 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:40.801 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:41.060 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fd460bbc-9e64-429c-a554-9394e6a39157 00:08:41.060 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd460bbc-9e64-429c-a554-9394e6a39157 lvol 20 00:08:41.318 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5bf4ca96-029d-4c0b-80d7-01b69ba7755f 00:08:41.318 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.576 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5bf4ca96-029d-4c0b-80d7-01b69ba7755f 00:08:41.834 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.094 [2024-07-25 11:55:19.175029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.094 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.353 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3980636 00:08:42.353 11:55:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:42.353 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:42.353 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.290 11:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5bf4ca96-029d-4c0b-80d7-01b69ba7755f MY_SNAPSHOT 00:08:43.548 11:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=56a11d82-4f74-42b7-be51-66d6dd60003f 00:08:43.548 11:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5bf4ca96-029d-4c0b-80d7-01b69ba7755f 30 00:08:43.807 11:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 56a11d82-4f74-42b7-be51-66d6dd60003f MY_CLONE 00:08:44.066 11:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=de5c4939-6a30-4df1-b33a-26195aff267f 00:08:44.066 11:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate de5c4939-6a30-4df1-b33a-26195aff267f 00:08:45.004 11:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3980636 00:08:53.118 Initializing NVMe Controllers 00:08:53.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:53.118 Controller IO queue size 128, less than required. 00:08:53.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:53.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:53.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:53.118 Initialization complete. Launching workers. 00:08:53.118 ======================================================== 00:08:53.118 Latency(us) 00:08:53.118 Device Information : IOPS MiB/s Average min max 00:08:53.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7016.60 27.41 18262.80 748.23 100744.23 00:08:53.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8618.20 33.66 14868.70 4482.21 74114.88 00:08:53.119 ======================================================== 00:08:53.119 Total : 15634.80 61.07 16391.91 748.23 100744.23 00:08:53.119 00:08:53.119 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:53.119 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5bf4ca96-029d-4c0b-80d7-01b69ba7755f 00:08:53.377 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd460bbc-9e64-429c-a554-9394e6a39157 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.636 rmmod nvme_tcp 00:08:53.636 rmmod nvme_fabrics 00:08:53.636 rmmod nvme_keyring 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3979986 ']' 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3979986 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3979986 ']' 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3979986 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3979986 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3979986' 00:08:53.636 killing process with pid 3979986 00:08:53.636 11:55:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3979986 00:08:53.636 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3979986 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.895 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.468 00:08:56.468 real 0m23.406s 00:08:56.468 user 1m8.883s 00:08:56.468 sys 0m7.338s 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.468 ************************************ 00:08:56.468 END TEST nvmf_lvol 00:08:56.468 ************************************ 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:56.468 11:55:33 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.468 ************************************ 00:08:56.468 START TEST nvmf_lvs_grow 00:08:56.468 ************************************ 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:56.468 * Looking for test storage... 00:08:56.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.468 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.469 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.749 11:55:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:01.749 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.749 
11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:01.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.749 11:55:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:01.749 Found net devices under 0000:af:00.0: cvl_0_0 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:01.749 Found net devices under 0000:af:00.1: cvl_0_1 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.749 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.009 11:55:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:09:02.009 00:09:02.009 --- 10.0.0.2 ping statistics --- 00:09:02.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.009 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:02.009 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:09:02.268 00:09:02.268 --- 10.0.0.1 ping statistics --- 00:09:02.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.268 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:02.268 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.268 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:02.268 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.268 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3986461 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3986461 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3986461 ']' 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.269 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.269 [2024-07-25 11:55:39.414458] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:09:02.269 [2024-07-25 11:55:39.414519] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.269 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.269 [2024-07-25 11:55:39.499852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.528 [2024-07-25 11:55:39.589372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.528 [2024-07-25 11:55:39.589420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:02.528 [2024-07-25 11:55:39.589430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.528 [2024-07-25 11:55:39.589439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.528 [2024-07-25 11:55:39.589447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.528 [2024-07-25 11:55:39.589476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.528 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.787 [2024-07-25 11:55:39.953278] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.788 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:02.788 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.788 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.788 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.788 ************************************ 00:09:02.788 START TEST lvs_grow_clean 00:09:02.788 ************************************ 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.788 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.046 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:03.046 11:55:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:03.305 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:03.305 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:03.305 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:03.565 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:03.565 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:03.565 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 lvol 150 00:09:04.133 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=df89fb74-9828-460e-83fc-e98196ade4fc 00:09:04.133 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:04.133 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:04.392 [2024-07-25 11:55:41.518010] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:04.392 [2024-07-25 11:55:41.518076] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:04.392 true 00:09:04.392 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:04.392 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:04.650 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:04.650 11:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:04.910 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df89fb74-9828-460e-83fc-e98196ade4fc 00:09:05.169 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:05.428 [2024-07-25 11:55:42.477013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.428 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.428 11:55:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3987030 00:09:05.428 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.428 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:05.428 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3987030 /var/tmp/bdevperf.sock 00:09:05.428 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3987030 ']' 00:09:05.688 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.688 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.688 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.688 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.688 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:05.688 [2024-07-25 11:55:42.774997] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:09:05.688 [2024-07-25 11:55:42.775057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987030 ] 00:09:05.688 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.688 [2024-07-25 11:55:42.856658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.688 [2024-07-25 11:55:42.961043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.626 11:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.626 11:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:06.626 11:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:06.885 Nvme0n1 00:09:06.885 11:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:07.453 [ 00:09:07.453 { 00:09:07.453 "name": "Nvme0n1", 00:09:07.453 "aliases": [ 00:09:07.453 "df89fb74-9828-460e-83fc-e98196ade4fc" 00:09:07.453 ], 00:09:07.453 "product_name": "NVMe disk", 00:09:07.453 "block_size": 4096, 00:09:07.453 "num_blocks": 38912, 00:09:07.453 "uuid": "df89fb74-9828-460e-83fc-e98196ade4fc", 00:09:07.453 "assigned_rate_limits": { 00:09:07.453 "rw_ios_per_sec": 0, 00:09:07.453 "rw_mbytes_per_sec": 0, 00:09:07.453 "r_mbytes_per_sec": 0, 00:09:07.453 "w_mbytes_per_sec": 0 00:09:07.453 }, 00:09:07.453 "claimed": false, 00:09:07.453 "zoned": false, 00:09:07.453 
"supported_io_types": { 00:09:07.453 "read": true, 00:09:07.453 "write": true, 00:09:07.453 "unmap": true, 00:09:07.453 "flush": true, 00:09:07.453 "reset": true, 00:09:07.453 "nvme_admin": true, 00:09:07.453 "nvme_io": true, 00:09:07.453 "nvme_io_md": false, 00:09:07.453 "write_zeroes": true, 00:09:07.453 "zcopy": false, 00:09:07.453 "get_zone_info": false, 00:09:07.453 "zone_management": false, 00:09:07.453 "zone_append": false, 00:09:07.453 "compare": true, 00:09:07.453 "compare_and_write": true, 00:09:07.453 "abort": true, 00:09:07.453 "seek_hole": false, 00:09:07.453 "seek_data": false, 00:09:07.453 "copy": true, 00:09:07.453 "nvme_iov_md": false 00:09:07.453 }, 00:09:07.453 "memory_domains": [ 00:09:07.453 { 00:09:07.453 "dma_device_id": "system", 00:09:07.453 "dma_device_type": 1 00:09:07.453 } 00:09:07.453 ], 00:09:07.453 "driver_specific": { 00:09:07.453 "nvme": [ 00:09:07.453 { 00:09:07.453 "trid": { 00:09:07.453 "trtype": "TCP", 00:09:07.453 "adrfam": "IPv4", 00:09:07.453 "traddr": "10.0.0.2", 00:09:07.453 "trsvcid": "4420", 00:09:07.453 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:07.453 }, 00:09:07.453 "ctrlr_data": { 00:09:07.453 "cntlid": 1, 00:09:07.453 "vendor_id": "0x8086", 00:09:07.453 "model_number": "SPDK bdev Controller", 00:09:07.453 "serial_number": "SPDK0", 00:09:07.453 "firmware_revision": "24.09", 00:09:07.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.453 "oacs": { 00:09:07.453 "security": 0, 00:09:07.453 "format": 0, 00:09:07.453 "firmware": 0, 00:09:07.453 "ns_manage": 0 00:09:07.453 }, 00:09:07.453 "multi_ctrlr": true, 00:09:07.453 "ana_reporting": false 00:09:07.453 }, 00:09:07.453 "vs": { 00:09:07.453 "nvme_version": "1.3" 00:09:07.453 }, 00:09:07.453 "ns_data": { 00:09:07.453 "id": 1, 00:09:07.453 "can_share": true 00:09:07.453 } 00:09:07.453 } 00:09:07.453 ], 00:09:07.453 "mp_policy": "active_passive" 00:09:07.453 } 00:09:07.453 } 00:09:07.453 ] 00:09:07.453 11:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3987304 00:09:07.453 11:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:07.453 11:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:07.453 Running I/O for 10 seconds... 00:09:08.848 Latency(us) 00:09:08.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.848 Nvme0n1 : 1.00 14718.00 57.49 0.00 0.00 0.00 0.00 0.00 00:09:08.848 =================================================================================================================== 00:09:08.848 Total : 14718.00 57.49 0.00 0.00 0.00 0.00 0.00 00:09:08.848 00:09:09.416 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:09.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.416 Nvme0n1 : 2.00 14795.00 57.79 0.00 0.00 0.00 0.00 0.00 00:09:09.416 =================================================================================================================== 00:09:09.416 Total : 14795.00 57.79 0.00 0.00 0.00 0.00 0.00 00:09:09.416 00:09:09.674 true 00:09:09.674 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:09.674 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:09.932 11:55:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:09.932 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:09.932 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3987304 00:09:10.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.499 Nvme0n1 : 3.00 14834.00 57.95 0.00 0.00 0.00 0.00 0.00 00:09:10.499 =================================================================================================================== 00:09:10.499 Total : 14834.00 57.95 0.00 0.00 0.00 0.00 0.00 00:09:10.499 00:09:11.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.435 Nvme0n1 : 4.00 14863.50 58.06 0.00 0.00 0.00 0.00 0.00 00:09:11.435 =================================================================================================================== 00:09:11.435 Total : 14863.50 58.06 0.00 0.00 0.00 0.00 0.00 00:09:11.435 00:09:12.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.812 Nvme0n1 : 5.00 14886.00 58.15 0.00 0.00 0.00 0.00 0.00 00:09:12.812 =================================================================================================================== 00:09:12.812 Total : 14886.00 58.15 0.00 0.00 0.00 0.00 0.00 00:09:12.812 00:09:13.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.747 Nvme0n1 : 6.00 14895.67 58.19 0.00 0.00 0.00 0.00 0.00 00:09:13.747 =================================================================================================================== 00:09:13.747 Total : 14895.67 58.19 0.00 0.00 0.00 0.00 0.00 00:09:13.747 00:09:14.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.683 Nvme0n1 : 7.00 14912.86 58.25 0.00 0.00 0.00 0.00 0.00 00:09:14.683 
=================================================================================================================== 00:09:14.683 Total : 14912.86 58.25 0.00 0.00 0.00 0.00 0.00 00:09:14.683 00:09:15.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.620 Nvme0n1 : 8.00 14926.75 58.31 0.00 0.00 0.00 0.00 0.00 00:09:15.620 =================================================================================================================== 00:09:15.620 Total : 14926.75 58.31 0.00 0.00 0.00 0.00 0.00 00:09:15.620 00:09:16.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.555 Nvme0n1 : 9.00 14939.33 58.36 0.00 0.00 0.00 0.00 0.00 00:09:16.555 =================================================================================================================== 00:09:16.555 Total : 14939.33 58.36 0.00 0.00 0.00 0.00 0.00 00:09:16.555 00:09:17.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.491 Nvme0n1 : 10.00 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:09:17.491 =================================================================================================================== 00:09:17.491 Total : 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:09:17.491 00:09:17.491 00:09:17.491 Latency(us) 00:09:17.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.491 Nvme0n1 : 10.01 14950.26 58.40 0.00 0.00 8552.79 5362.04 13464.67 00:09:17.491 =================================================================================================================== 00:09:17.491 Total : 14950.26 58.40 0.00 0.00 8552.79 5362.04 13464.67 00:09:17.491 0 00:09:17.491 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3987030 00:09:17.491 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 3987030 ']' 00:09:17.491 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3987030 00:09:17.491 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:17.491 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.491 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3987030 00:09:17.750 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:17.750 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:17.750 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3987030' 00:09:17.750 killing process with pid 3987030 00:09:17.750 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3987030 00:09:17.750 Received shutdown signal, test time was about 10.000000 seconds 00:09:17.750 00:09:17.750 Latency(us) 00:09:17.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.750 =================================================================================================================== 00:09:17.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:17.750 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3987030 00:09:17.750 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:18.009 11:55:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:18.268 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:18.268 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:18.527 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:18.527 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:18.527 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.097 [2024-07-25 11:55:56.255339] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:19.097 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:19.356 request: 00:09:19.356 { 00:09:19.356 "uuid": "6f874b3a-db9e-4435-a2b5-a0616ba304c1", 00:09:19.356 "method": "bdev_lvol_get_lvstores", 00:09:19.356 "req_id": 1 00:09:19.356 } 00:09:19.356 Got JSON-RPC error response 00:09:19.356 response: 00:09:19.356 { 00:09:19.357 "code": -19, 00:09:19.357 "message": "No such device" 00:09:19.357 } 00:09:19.357 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:19.357 11:55:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.357 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:19.357 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.357 11:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:19.925 aio_bdev 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df89fb74-9828-460e-83fc-e98196ade4fc 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=df89fb74-9828-460e-83fc-e98196ade4fc 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.925 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:20.493 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df89fb74-9828-460e-83fc-e98196ade4fc -t 2000 00:09:20.751 [ 00:09:20.751 { 
00:09:20.751 "name": "df89fb74-9828-460e-83fc-e98196ade4fc", 00:09:20.751 "aliases": [ 00:09:20.751 "lvs/lvol" 00:09:20.751 ], 00:09:20.751 "product_name": "Logical Volume", 00:09:20.751 "block_size": 4096, 00:09:20.751 "num_blocks": 38912, 00:09:20.751 "uuid": "df89fb74-9828-460e-83fc-e98196ade4fc", 00:09:20.751 "assigned_rate_limits": { 00:09:20.751 "rw_ios_per_sec": 0, 00:09:20.751 "rw_mbytes_per_sec": 0, 00:09:20.751 "r_mbytes_per_sec": 0, 00:09:20.751 "w_mbytes_per_sec": 0 00:09:20.751 }, 00:09:20.751 "claimed": false, 00:09:20.751 "zoned": false, 00:09:20.751 "supported_io_types": { 00:09:20.751 "read": true, 00:09:20.751 "write": true, 00:09:20.751 "unmap": true, 00:09:20.751 "flush": false, 00:09:20.751 "reset": true, 00:09:20.751 "nvme_admin": false, 00:09:20.751 "nvme_io": false, 00:09:20.751 "nvme_io_md": false, 00:09:20.751 "write_zeroes": true, 00:09:20.751 "zcopy": false, 00:09:20.751 "get_zone_info": false, 00:09:20.751 "zone_management": false, 00:09:20.751 "zone_append": false, 00:09:20.751 "compare": false, 00:09:20.751 "compare_and_write": false, 00:09:20.751 "abort": false, 00:09:20.751 "seek_hole": true, 00:09:20.751 "seek_data": true, 00:09:20.751 "copy": false, 00:09:20.751 "nvme_iov_md": false 00:09:20.751 }, 00:09:20.751 "driver_specific": { 00:09:20.751 "lvol": { 00:09:20.751 "lvol_store_uuid": "6f874b3a-db9e-4435-a2b5-a0616ba304c1", 00:09:20.751 "base_bdev": "aio_bdev", 00:09:20.751 "thin_provision": false, 00:09:20.751 "num_allocated_clusters": 38, 00:09:20.751 "snapshot": false, 00:09:20.751 "clone": false, 00:09:20.751 "esnap_clone": false 00:09:20.751 } 00:09:20.751 } 00:09:20.751 } 00:09:20.751 ] 00:09:20.751 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:21.010 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:21.010 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:21.010 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:21.010 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:21.010 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:21.269 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:21.269 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df89fb74-9828-460e-83fc-e98196ade4fc 00:09:21.837 11:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f874b3a-db9e-4435-a2b5-a0616ba304c1 00:09:22.404 11:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.971 00:09:22.971 real 0m20.100s 00:09:22.971 user 0m19.936s 00:09:22.971 sys 0m1.925s 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.971 11:56:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:22.971 ************************************ 00:09:22.971 END TEST lvs_grow_clean 00:09:22.971 ************************************ 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:22.971 ************************************ 00:09:22.971 START TEST lvs_grow_dirty 00:09:22.971 ************************************ 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.971 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.229 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:23.229 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:23.795 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:23.795 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:23.795 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:24.054 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:24.054 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:24.054 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
c6b5f444-6425-4803-ae90-43eefa42b37d lvol 150 00:09:24.312 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 00:09:24.312 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.312 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:24.878 [2024-07-25 11:56:01.920710] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:24.878 [2024-07-25 11:56:01.920777] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:24.878 true 00:09:24.878 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:24.878 11:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:25.137 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:25.137 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:25.703 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 00:09:25.703 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:26.270 [2024-07-25 11:56:03.409275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.270 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3990858 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3990858 /var/tmp/bdevperf.sock 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3990858 ']' 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.528 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.528 [2024-07-25 11:56:03.763041] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:09:26.528 [2024-07-25 11:56:03.763163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990858 ] 00:09:26.528 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.787 [2024-07-25 11:56:03.880400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.787 [2024-07-25 11:56:03.983684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.722 11:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.722 11:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:27.722 11:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:28.290 Nvme0n1 00:09:28.290 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.875 [ 00:09:28.875 { 00:09:28.875 "name": "Nvme0n1", 00:09:28.875 "aliases": [ 
00:09:28.875 "6ed16f33-a35f-476e-b9fd-a10b3f5ea7af" 00:09:28.875 ], 00:09:28.875 "product_name": "NVMe disk", 00:09:28.875 "block_size": 4096, 00:09:28.875 "num_blocks": 38912, 00:09:28.875 "uuid": "6ed16f33-a35f-476e-b9fd-a10b3f5ea7af", 00:09:28.875 "assigned_rate_limits": { 00:09:28.875 "rw_ios_per_sec": 0, 00:09:28.875 "rw_mbytes_per_sec": 0, 00:09:28.875 "r_mbytes_per_sec": 0, 00:09:28.875 "w_mbytes_per_sec": 0 00:09:28.875 }, 00:09:28.875 "claimed": false, 00:09:28.875 "zoned": false, 00:09:28.875 "supported_io_types": { 00:09:28.875 "read": true, 00:09:28.875 "write": true, 00:09:28.875 "unmap": true, 00:09:28.875 "flush": true, 00:09:28.875 "reset": true, 00:09:28.875 "nvme_admin": true, 00:09:28.875 "nvme_io": true, 00:09:28.875 "nvme_io_md": false, 00:09:28.875 "write_zeroes": true, 00:09:28.875 "zcopy": false, 00:09:28.875 "get_zone_info": false, 00:09:28.875 "zone_management": false, 00:09:28.875 "zone_append": false, 00:09:28.875 "compare": true, 00:09:28.875 "compare_and_write": true, 00:09:28.875 "abort": true, 00:09:28.875 "seek_hole": false, 00:09:28.875 "seek_data": false, 00:09:28.875 "copy": true, 00:09:28.875 "nvme_iov_md": false 00:09:28.875 }, 00:09:28.875 "memory_domains": [ 00:09:28.875 { 00:09:28.875 "dma_device_id": "system", 00:09:28.875 "dma_device_type": 1 00:09:28.875 } 00:09:28.875 ], 00:09:28.875 "driver_specific": { 00:09:28.875 "nvme": [ 00:09:28.875 { 00:09:28.875 "trid": { 00:09:28.875 "trtype": "TCP", 00:09:28.875 "adrfam": "IPv4", 00:09:28.875 "traddr": "10.0.0.2", 00:09:28.876 "trsvcid": "4420", 00:09:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.876 }, 00:09:28.876 "ctrlr_data": { 00:09:28.876 "cntlid": 1, 00:09:28.876 "vendor_id": "0x8086", 00:09:28.876 "model_number": "SPDK bdev Controller", 00:09:28.876 "serial_number": "SPDK0", 00:09:28.876 "firmware_revision": "24.09", 00:09:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.876 "oacs": { 00:09:28.876 "security": 0, 00:09:28.876 "format": 0, 00:09:28.876 
"firmware": 0, 00:09:28.876 "ns_manage": 0 00:09:28.876 }, 00:09:28.876 "multi_ctrlr": true, 00:09:28.876 "ana_reporting": false 00:09:28.876 }, 00:09:28.876 "vs": { 00:09:28.876 "nvme_version": "1.3" 00:09:28.876 }, 00:09:28.876 "ns_data": { 00:09:28.876 "id": 1, 00:09:28.876 "can_share": true 00:09:28.876 } 00:09:28.876 } 00:09:28.876 ], 00:09:28.876 "mp_policy": "active_passive" 00:09:28.876 } 00:09:28.876 } 00:09:28.876 ] 00:09:28.876 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3991299 00:09:28.876 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.876 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.876 Running I/O for 10 seconds... 00:09:29.812 Latency(us) 00:09:29.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.812 Nvme0n1 : 1.00 15339.00 59.92 0.00 0.00 0.00 0.00 0.00 00:09:29.812 =================================================================================================================== 00:09:29.812 Total : 15339.00 59.92 0.00 0.00 0.00 0.00 0.00 00:09:29.812 00:09:30.748 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:31.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.006 Nvme0n1 : 2.00 15433.50 60.29 0.00 0.00 0.00 0.00 0.00 00:09:31.006 =================================================================================================================== 00:09:31.006 Total : 15433.50 60.29 
0.00 0.00 0.00 0.00 0.00 00:09:31.006 00:09:31.265 true 00:09:31.265 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:31.265 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:31.523 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:31.523 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:31.523 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3991299 00:09:32.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.091 Nvme0n1 : 3.00 15475.00 60.45 0.00 0.00 0.00 0.00 0.00 00:09:32.091 =================================================================================================================== 00:09:32.091 Total : 15475.00 60.45 0.00 0.00 0.00 0.00 0.00 00:09:32.091 00:09:33.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.080 Nvme0n1 : 4.00 15511.50 60.59 0.00 0.00 0.00 0.00 0.00 00:09:33.080 =================================================================================================================== 00:09:33.080 Total : 15511.50 60.59 0.00 0.00 0.00 0.00 0.00 00:09:33.080 00:09:34.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.015 Nvme0n1 : 5.00 15546.40 60.73 0.00 0.00 0.00 0.00 0.00 00:09:34.015 =================================================================================================================== 00:09:34.015 Total : 15546.40 60.73 0.00 0.00 0.00 0.00 0.00 00:09:34.015 00:09:34.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:09:34.951 Nvme0n1 : 6.00 15564.50 60.80 0.00 0.00 0.00 0.00 0.00 00:09:34.951 =================================================================================================================== 00:09:34.951 Total : 15564.50 60.80 0.00 0.00 0.00 0.00 0.00 00:09:34.951 00:09:35.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.899 Nvme0n1 : 7.00 15581.86 60.87 0.00 0.00 0.00 0.00 0.00 00:09:35.899 =================================================================================================================== 00:09:35.899 Total : 15581.86 60.87 0.00 0.00 0.00 0.00 0.00 00:09:35.899 00:09:36.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.833 Nvme0n1 : 8.00 15594.75 60.92 0.00 0.00 0.00 0.00 0.00 00:09:36.833 =================================================================================================================== 00:09:36.833 Total : 15594.75 60.92 0.00 0.00 0.00 0.00 0.00 00:09:36.833 00:09:38.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.208 Nvme0n1 : 9.00 15611.78 60.98 0.00 0.00 0.00 0.00 0.00 00:09:38.208 =================================================================================================================== 00:09:38.208 Total : 15611.78 60.98 0.00 0.00 0.00 0.00 0.00 00:09:38.208 00:09:39.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.144 Nvme0n1 : 10.00 15619.20 61.01 0.00 0.00 0.00 0.00 0.00 00:09:39.144 =================================================================================================================== 00:09:39.144 Total : 15619.20 61.01 0.00 0.00 0.00 0.00 0.00 00:09:39.144 00:09:39.144 00:09:39.144 Latency(us) 00:09:39.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.144 Nvme0n1 : 10.01 15617.22 61.00 0.00 0.00 8189.84 
5093.93 14179.61 00:09:39.144 =================================================================================================================== 00:09:39.144 Total : 15617.22 61.00 0.00 0.00 8189.84 5093.93 14179.61 00:09:39.144 0 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3990858 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3990858 ']' 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3990858 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3990858 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3990858' 00:09:39.144 killing process with pid 3990858 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3990858 00:09:39.144 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.144 00:09:39.144 Latency(us) 00:09:39.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.144 =================================================================================================================== 00:09:39.144 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3990858 00:09:39.144 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.403 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.662 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:39.662 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:39.940 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:39.940 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:39.940 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3986461 00:09:39.940 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3986461 00:09:40.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3986461 Killed "${NVMF_APP[@]}" "$@" 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3993400 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3993400 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3993400 ']' 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.199 11:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.199 [2024-07-25 11:56:17.312903] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:09:40.199 [2024-07-25 11:56:17.312961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.199 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.199 [2024-07-25 11:56:17.398679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.199 [2024-07-25 11:56:17.486910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.199 [2024-07-25 11:56:17.486952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.199 [2024-07-25 11:56:17.486963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.199 [2024-07-25 11:56:17.486971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.199 [2024-07-25 11:56:17.486979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:40.199 [2024-07-25 11:56:17.487001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.576 [2024-07-25 11:56:18.683761] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:41.576 [2024-07-25 11:56:18.683864] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:41.576 [2024-07-25 11:56:18.683902] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 
00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.576 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.835 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ed16f33-a35f-476e-b9fd-a10b3f5ea7af -t 2000 00:09:42.404 [ 00:09:42.404 { 00:09:42.404 "name": "6ed16f33-a35f-476e-b9fd-a10b3f5ea7af", 00:09:42.404 "aliases": [ 00:09:42.404 "lvs/lvol" 00:09:42.404 ], 00:09:42.404 "product_name": "Logical Volume", 00:09:42.404 "block_size": 4096, 00:09:42.404 "num_blocks": 38912, 00:09:42.404 "uuid": "6ed16f33-a35f-476e-b9fd-a10b3f5ea7af", 00:09:42.404 "assigned_rate_limits": { 00:09:42.404 "rw_ios_per_sec": 0, 00:09:42.404 "rw_mbytes_per_sec": 0, 00:09:42.404 "r_mbytes_per_sec": 0, 00:09:42.404 "w_mbytes_per_sec": 0 00:09:42.404 }, 00:09:42.404 "claimed": false, 00:09:42.404 "zoned": false, 00:09:42.404 "supported_io_types": { 00:09:42.404 "read": true, 00:09:42.404 "write": true, 00:09:42.404 "unmap": true, 00:09:42.404 "flush": false, 00:09:42.404 "reset": true, 00:09:42.404 "nvme_admin": false, 00:09:42.404 "nvme_io": false, 00:09:42.404 "nvme_io_md": false, 00:09:42.404 "write_zeroes": true, 00:09:42.404 "zcopy": false, 00:09:42.404 "get_zone_info": false, 00:09:42.404 "zone_management": false, 00:09:42.404 "zone_append": 
false, 00:09:42.404 "compare": false, 00:09:42.404 "compare_and_write": false, 00:09:42.404 "abort": false, 00:09:42.404 "seek_hole": true, 00:09:42.404 "seek_data": true, 00:09:42.404 "copy": false, 00:09:42.404 "nvme_iov_md": false 00:09:42.404 }, 00:09:42.404 "driver_specific": { 00:09:42.404 "lvol": { 00:09:42.404 "lvol_store_uuid": "c6b5f444-6425-4803-ae90-43eefa42b37d", 00:09:42.404 "base_bdev": "aio_bdev", 00:09:42.404 "thin_provision": false, 00:09:42.404 "num_allocated_clusters": 38, 00:09:42.404 "snapshot": false, 00:09:42.404 "clone": false, 00:09:42.404 "esnap_clone": false 00:09:42.404 } 00:09:42.404 } 00:09:42.404 } 00:09:42.404 ] 00:09:42.404 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:42.404 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:42.404 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:42.404 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:42.663 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:42.663 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:42.663 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:42.663 11:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:42.922 [2024-07-25 11:56:20.173427] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.922 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.923 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.923 11:56:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:42.923 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:43.182 request: 00:09:43.182 { 00:09:43.182 "uuid": "c6b5f444-6425-4803-ae90-43eefa42b37d", 00:09:43.182 "method": "bdev_lvol_get_lvstores", 00:09:43.182 "req_id": 1 00:09:43.182 } 00:09:43.182 Got JSON-RPC error response 00:09:43.182 response: 00:09:43.182 { 00:09:43.182 "code": -19, 00:09:43.182 "message": "No such device" 00:09:43.182 } 00:09:43.182 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:43.182 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.182 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.182 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.182 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.441 aio_bdev 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.441 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.700 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ed16f33-a35f-476e-b9fd-a10b3f5ea7af -t 2000 00:09:43.959 [ 00:09:43.959 { 00:09:43.959 "name": "6ed16f33-a35f-476e-b9fd-a10b3f5ea7af", 00:09:43.959 "aliases": [ 00:09:43.959 "lvs/lvol" 00:09:43.959 ], 00:09:43.959 "product_name": "Logical Volume", 00:09:43.959 "block_size": 4096, 00:09:43.959 "num_blocks": 38912, 00:09:43.959 "uuid": "6ed16f33-a35f-476e-b9fd-a10b3f5ea7af", 00:09:43.959 "assigned_rate_limits": { 00:09:43.959 "rw_ios_per_sec": 0, 00:09:43.959 "rw_mbytes_per_sec": 0, 00:09:43.959 "r_mbytes_per_sec": 0, 00:09:43.959 "w_mbytes_per_sec": 0 00:09:43.959 }, 00:09:43.959 "claimed": false, 00:09:43.959 "zoned": false, 00:09:43.959 "supported_io_types": { 00:09:43.959 "read": true, 00:09:43.959 "write": true, 00:09:43.959 "unmap": true, 00:09:43.959 "flush": false, 00:09:43.959 "reset": true, 00:09:43.959 "nvme_admin": false, 00:09:43.959 "nvme_io": false, 00:09:43.959 "nvme_io_md": false, 00:09:43.959 "write_zeroes": true, 00:09:43.959 "zcopy": false, 00:09:43.959 "get_zone_info": false, 00:09:43.959 "zone_management": false, 00:09:43.959 "zone_append": false, 00:09:43.959 "compare": false, 00:09:43.959 "compare_and_write": false, 
00:09:43.959 "abort": false, 00:09:43.959 "seek_hole": true, 00:09:43.959 "seek_data": true, 00:09:43.959 "copy": false, 00:09:43.959 "nvme_iov_md": false 00:09:43.959 }, 00:09:43.959 "driver_specific": { 00:09:43.959 "lvol": { 00:09:43.959 "lvol_store_uuid": "c6b5f444-6425-4803-ae90-43eefa42b37d", 00:09:43.959 "base_bdev": "aio_bdev", 00:09:43.959 "thin_provision": false, 00:09:43.959 "num_allocated_clusters": 38, 00:09:43.959 "snapshot": false, 00:09:43.959 "clone": false, 00:09:43.959 "esnap_clone": false 00:09:43.959 } 00:09:43.959 } 00:09:43.959 } 00:09:43.959 ] 00:09:43.959 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:43.959 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:43.959 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.218 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.218 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:44.218 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.478 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.478 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6ed16f33-a35f-476e-b9fd-a10b3f5ea7af 00:09:45.046 11:56:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6b5f444-6425-4803-ae90-43eefa42b37d 00:09:45.304 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.872 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.872 00:09:45.872 real 0m22.813s 00:09:45.872 user 0m58.085s 00:09:45.872 sys 0m4.100s 00:09:45.872 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.872 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.872 ************************************ 00:09:45.872 END TEST lvs_grow_dirty 00:09:45.872 ************************************ 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:45.872 nvmf_trace.0 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.872 rmmod nvme_tcp 00:09:45.872 rmmod nvme_fabrics 00:09:45.872 rmmod nvme_keyring 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3993400 ']' 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3993400 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3993400 ']' 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3993400 
00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:45.872 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.873 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3993400 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3993400' 00:09:46.132 killing process with pid 3993400 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3993400 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3993400 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.132 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.666 11:56:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.666 00:09:48.666 real 0m52.201s 00:09:48.666 user 1m26.237s 00:09:48.666 sys 0m10.905s 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:48.666 ************************************ 00:09:48.666 END TEST nvmf_lvs_grow 00:09:48.666 ************************************ 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.666 ************************************ 00:09:48.666 START TEST nvmf_bdev_io_wait 00:09:48.666 ************************************ 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:48.666 * Looking for test storage... 
00:09:48.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.666 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:48.667 11:56:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.667 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.946 11:56:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:53.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:53.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.946 11:56:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:53.946 Found net devices under 0000:af:00.0: cvl_0_0 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:53.946 Found net devices under 0000:af:00.1: cvl_0_1 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.946 11:56:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.946 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.947 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.947 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.947 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.947 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.947 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.947 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.205 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.205 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.205 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.205 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.205 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.205 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:09:54.464 00:09:54.464 --- 10.0.0.2 ping statistics --- 00:09:54.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.464 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:09:54.464 00:09:54.464 --- 10.0.0.1 ping statistics --- 00:09:54.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.464 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3998158 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@482 -- # waitforlisten 3998158 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3998158 ']' 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.464 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.465 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.465 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.465 [2024-07-25 11:56:31.628610] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:09:54.465 [2024-07-25 11:56:31.628675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.465 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.465 [2024-07-25 11:56:31.717403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.723 [2024-07-25 11:56:31.812599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:54.723 [2024-07-25 11:56:31.812649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.723 [2024-07-25 11:56:31.812659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.723 [2024-07-25 11:56:31.812668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.723 [2024-07-25 11:56:31.812676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.723 [2024-07-25 11:56:31.812728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.723 [2024-07-25 11:56:31.812839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.723 [2024-07-25 11:56:31.812951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.723 [2024-07-25 11:56:31.812952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.291 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.291 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:55.291 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.291 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.291 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 
11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 [2024-07-25 11:56:32.703613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 Malloc0 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.550 
11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.550 [2024-07-25 11:56:32.772966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3998292 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3998295 00:09:55.550 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.551 { 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme$subsystem", 00:09:55.551 "trtype": "$TEST_TRANSPORT", 00:09:55.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "$NVMF_PORT", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.551 "hdgst": ${hdgst:-false}, 00:09:55.551 "ddgst": ${ddgst:-false} 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 } 00:09:55.551 EOF 00:09:55.551 )") 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3998298 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.551 11:56:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.551 { 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme$subsystem", 00:09:55.551 "trtype": "$TEST_TRANSPORT", 00:09:55.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "$NVMF_PORT", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.551 "hdgst": ${hdgst:-false}, 00:09:55.551 "ddgst": ${ddgst:-false} 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 } 00:09:55.551 EOF 00:09:55.551 )") 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3998302 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.551 { 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme$subsystem", 00:09:55.551 "trtype": "$TEST_TRANSPORT", 00:09:55.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.551 "adrfam": "ipv4", 
00:09:55.551 "trsvcid": "$NVMF_PORT", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.551 "hdgst": ${hdgst:-false}, 00:09:55.551 "ddgst": ${ddgst:-false} 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 } 00:09:55.551 EOF 00:09:55.551 )") 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.551 { 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme$subsystem", 00:09:55.551 "trtype": "$TEST_TRANSPORT", 00:09:55.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "$NVMF_PORT", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.551 "hdgst": ${hdgst:-false}, 00:09:55.551 "ddgst": ${ddgst:-false} 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 } 00:09:55.551 EOF 00:09:55.551 )") 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 3998292 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme1", 00:09:55.551 "trtype": "tcp", 00:09:55.551 "traddr": "10.0.0.2", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "4420", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.551 "hdgst": false, 00:09:55.551 "ddgst": false 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 }' 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme1", 00:09:55.551 "trtype": "tcp", 00:09:55.551 "traddr": "10.0.0.2", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "4420", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.551 "hdgst": false, 00:09:55.551 "ddgst": false 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 }' 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme1", 00:09:55.551 "trtype": "tcp", 00:09:55.551 "traddr": "10.0.0.2", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "4420", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.551 "hdgst": false, 00:09:55.551 "ddgst": false 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 }' 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:55.551 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.551 "params": { 00:09:55.551 "name": "Nvme1", 00:09:55.551 "trtype": "tcp", 00:09:55.551 "traddr": "10.0.0.2", 00:09:55.551 "adrfam": "ipv4", 00:09:55.551 "trsvcid": "4420", 00:09:55.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.551 "hdgst": false, 00:09:55.551 "ddgst": false 00:09:55.551 }, 00:09:55.551 "method": "bdev_nvme_attach_controller" 00:09:55.551 }' 00:09:55.551 [2024-07-25 11:56:32.828114] Starting SPDK v24.09-pre git sha1 
86fd5638b / DPDK 24.03.0 initialization... 00:09:55.551 [2024-07-25 11:56:32.828179] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:55.551 [2024-07-25 11:56:32.828761] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:09:55.551 [2024-07-25 11:56:32.828816] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:55.551 [2024-07-25 11:56:32.830389] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:09:55.551 [2024-07-25 11:56:32.830442] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:55.551 [2024-07-25 11:56:32.830563] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:09:55.551 [2024-07-25 11:56:32.830627] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.810 [2024-07-25 11:56:33.053905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.069 [2024-07-25 11:56:33.115951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.069 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.069 [2024-07-25 11:56:33.195200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:56.069 [2024-07-25 11:56:33.206417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:56.069 [2024-07-25 11:56:33.215676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.069 [2024-07-25 11:56:33.268391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.069 [2024-07-25 11:56:33.322408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:56.069 [2024-07-25 11:56:33.357362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:56.326 Running I/O for 1 seconds... 00:09:56.326 Running I/O for 1 seconds... 00:09:56.326 Running I/O for 1 seconds... 00:09:56.584 Running I/O for 1 seconds... 
00:09:57.522 00:09:57.522 Latency(us) 00:09:57.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.522 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:57.522 Nvme1n1 : 1.01 5088.92 19.88 0.00 0.00 24961.35 7536.64 36461.85 00:09:57.522 =================================================================================================================== 00:09:57.522 Total : 5088.92 19.88 0.00 0.00 24961.35 7536.64 36461.85 00:09:57.522 00:09:57.522 Latency(us) 00:09:57.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.522 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:57.522 Nvme1n1 : 1.02 4856.42 18.97 0.00 0.00 26085.46 8162.21 39798.23 00:09:57.522 =================================================================================================================== 00:09:57.522 Total : 4856.42 18.97 0.00 0.00 26085.46 8162.21 39798.23 00:09:57.522 00:09:57.522 Latency(us) 00:09:57.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.522 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:57.522 Nvme1n1 : 1.00 163237.40 637.65 0.00 0.00 780.59 318.37 949.53 00:09:57.522 =================================================================================================================== 00:09:57.522 Total : 163237.40 637.65 0.00 0.00 780.59 318.37 949.53 00:09:57.522 00:09:57.522 Latency(us) 00:09:57.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.522 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:57.522 Nvme1n1 : 1.01 5062.62 19.78 0.00 0.00 25173.85 8281.37 55526.87 00:09:57.522 =================================================================================================================== 00:09:57.522 Total : 5062.62 19.78 0.00 0.00 25173.85 8281.37 55526.87 00:09:57.781 11:56:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3998295 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3998298 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3998302 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.781 rmmod nvme_tcp 00:09:57.781 rmmod nvme_fabrics 00:09:57.781 rmmod nvme_keyring 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@124 -- # set -e 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3998158 ']' 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3998158 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3998158 ']' 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3998158 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.781 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3998158 00:09:57.781 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.781 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.781 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3998158' 00:09:57.781 killing process with pid 3998158 00:09:57.781 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3998158 00:09:57.781 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3998158 00:09:58.040 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.040 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.040 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.040 
11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.040 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.041 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.041 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.041 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.574 00:10:00.574 real 0m11.764s 00:10:00.574 user 0m21.484s 00:10:00.574 sys 0m6.201s 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.574 ************************************ 00:10:00.574 END TEST nvmf_bdev_io_wait 00:10:00.574 ************************************ 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.574 ************************************ 00:10:00.574 START TEST nvmf_queue_depth 00:10:00.574 ************************************ 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:00.574 * Looking for test storage... 00:10:00.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:00.574 11:56:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:00.574 
11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.574 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:05.849 11:56:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.849 11:56:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:05.849 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.849 
11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:05.849 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:05.849 Found net devices under 0000:af:00.0: cvl_0_0 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:05.849 Found net devices under 0000:af:00.1: cvl_0_1 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:05.849 
11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.849 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.850 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:05.850 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.850 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.850 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:05.850 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:05.850 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:10:06.110 00:10:06.110 --- 10.0.0.2 ping statistics --- 00:10:06.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.110 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:10:06.110 00:10:06.110 --- 10.0.0.1 ping statistics --- 00:10:06.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.110 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.110 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4002527 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 
4002527 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 4002527 ']' 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.408 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.408 [2024-07-25 11:56:43.500031] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:10:06.408 [2024-07-25 11:56:43.500090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.408 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.408 [2024-07-25 11:56:43.586695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.408 [2024-07-25 11:56:43.689119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.408 [2024-07-25 11:56:43.689169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:06.408 [2024-07-25 11:56:43.689182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.408 [2024-07-25 11:56:43.689193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.408 [2024-07-25 11:56:43.689203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.408 [2024-07-25 11:56:43.689229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 [2024-07-25 11:56:44.409126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 Malloc0 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 [2024-07-25 11:56:44.476007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.392 11:56:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4002572 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4002572 /var/tmp/bdevperf.sock 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 4002572 ']' 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:07.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.392 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.392 [2024-07-25 11:56:44.530209] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:10:07.392 [2024-07-25 11:56:44.530272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002572 ] 00:10:07.392 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.392 [2024-07-25 11:56:44.615415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.652 [2024-07-25 11:56:44.707467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.652 NVMe0n1 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.652 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.911 Running I/O for 10 seconds... 
00:10:20.123 00:10:20.124 Latency(us) 00:10:20.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.124 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:20.124 Verification LBA range: start 0x0 length 0x4000 00:10:20.124 NVMe0n1 : 10.14 6490.75 25.35 0.00 0.00 156146.43 29550.78 95325.09 00:10:20.124 =================================================================================================================== 00:10:20.124 Total : 6490.75 25.35 0.00 0.00 156146.43 29550.78 95325.09 00:10:20.124 0 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4002572 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 4002572 ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 4002572 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4002572 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4002572' 00:10:20.124 killing process with pid 4002572 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 4002572 00:10:20.124 Received shutdown signal, test time was about 10.000000 seconds 00:10:20.124 00:10:20.124 Latency(us) 00:10:20.124 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.124 =================================================================================================================== 00:10:20.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 4002572 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.124 rmmod nvme_tcp 00:10:20.124 rmmod nvme_fabrics 00:10:20.124 rmmod nvme_keyring 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4002527 ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4002527 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 4002527 ']' 
00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 4002527 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4002527 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4002527' 00:10:20.124 killing process with pid 4002527 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 4002527 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 4002527 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:10:20.124 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:20.693 00:10:20.693 real 0m20.507s 00:10:20.693 user 0m24.421s 00:10:20.693 sys 0m5.900s 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.693 ************************************ 00:10:20.693 END TEST nvmf_queue_depth 00:10:20.693 ************************************ 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.693 ************************************ 00:10:20.693 START TEST nvmf_target_multipath 00:10:20.693 ************************************ 00:10:20.693 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.953 * Looking for test storage... 
00:10:20.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.953 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.954 11:56:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 
00:10:27.529 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:27.529 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:27.529 Found net devices under 0000:af:00.0: cvl_0_0 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.529 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:27.530 Found net devices under 0000:af:00.1: cvl_0_1 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.530 11:57:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:27.530 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:27.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:27.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:10:27.530 00:10:27.530 --- 10.0.0.2 ping statistics --- 00:10:27.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.530 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:10:27.530 00:10:27.530 --- 10.0.0.1 ping statistics --- 00:10:27.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.530 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:27.530 11:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:27.530 only one NIC for nvmf test 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.530 rmmod nvme_tcp 00:10:27.530 rmmod nvme_fabrics 00:10:27.530 rmmod nvme_keyring 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.530 11:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.530 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:29.440 11:57:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.440 00:10:29.440 real 0m8.318s 00:10:29.440 user 0m1.713s 00:10:29.440 sys 0m4.582s 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:29.440 ************************************ 00:10:29.440 END TEST nvmf_target_multipath 00:10:29.440 ************************************ 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.440 
11:57:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.440 ************************************ 00:10:29.440 START TEST nvmf_zcopy 00:10:29.440 ************************************ 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.440 * Looking for test storage... 00:10:29.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.440 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.441 11:57:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.441 11:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.044 11:57:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:36.044 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:36.044 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.044 11:57:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:36.044 Found net devices under 0000:af:00.0: cvl_0_0 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.044 
11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:36.044 Found net devices under 0000:af:00.1: cvl_0_1 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.044 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:10:36.045 00:10:36.045 --- 10.0.0.2 ping statistics --- 00:10:36.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.045 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:10:36.045 00:10:36.045 --- 10.0.0.1 ping statistics --- 00:10:36.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.045 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4012362 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4012362 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 4012362 ']' 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.045 11:57:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.045 [2024-07-25 11:57:12.600848] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:10:36.045 [2024-07-25 11:57:12.600913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.045 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.045 [2024-07-25 11:57:12.690245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.045 [2024-07-25 11:57:12.791408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.045 [2024-07-25 11:57:12.791459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.045 [2024-07-25 11:57:12.791472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.045 [2024-07-25 11:57:12.791483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.045 [2024-07-25 11:57:12.791493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:36.045 [2024-07-25 11:57:12.791518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.303 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.561 [2024-07-25 11:57:13.606951] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:36.561 [2024-07-25 11:57:13.627165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:36.561 malloc0
00:10:36.561 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:36.562 {
00:10:36.562 "params": {
00:10:36.562 "name": "Nvme$subsystem",
00:10:36.562 "trtype": "$TEST_TRANSPORT",
00:10:36.562 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:36.562 "adrfam": "ipv4",
00:10:36.562 "trsvcid": "$NVMF_PORT",
00:10:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:36.562 "hdgst": ${hdgst:-false},
00:10:36.562 "ddgst": ${ddgst:-false}
00:10:36.562 },
00:10:36.562 "method": "bdev_nvme_attach_controller"
00:10:36.562 }
00:10:36.562 EOF
00:10:36.562 )")
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:36.562 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:36.562 "params": {
00:10:36.562 "name": "Nvme1",
00:10:36.562 "trtype": "tcp",
00:10:36.562 "traddr": "10.0.0.2",
00:10:36.562 "adrfam": "ipv4",
00:10:36.562 "trsvcid": "4420",
00:10:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:36.562 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:36.562 "hdgst": false,
00:10:36.562 "ddgst": false
00:10:36.562 },
00:10:36.562 "method": "bdev_nvme_attach_controller"
00:10:36.562 }'
00:10:36.562 [2024-07-25 11:57:13.730595] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:10:36.562 [2024-07-25 11:57:13.730667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012643 ]
00:10:36.562 EAL: No free 2048 kB hugepages reported on node 1
00:10:36.562 [2024-07-25 11:57:13.814989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:36.820 [2024-07-25 11:57:13.906785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.820 Running I/O for 10 seconds...
00:10:49.031 
00:10:49.031 Latency(us)
00:10:49.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.031 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:49.031 Verification LBA range: start 0x0 length 0x1000
00:10:49.031 Nvme1n1 : 10.02 4481.05 35.01 0.00 0.00 28482.16 521.31 37415.10
00:10:49.031 ===================================================================================================================
00:10:49.031 Total : 4481.05 35.01 0.00 0.00 28482.16 521.31 37415.10
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4014502
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:49.031 {
00:10:49.031 "params": {
00:10:49.031 "name": "Nvme$subsystem",
00:10:49.031 "trtype": "$TEST_TRANSPORT",
00:10:49.031 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:49.031 "adrfam": "ipv4",
00:10:49.031 "trsvcid": "$NVMF_PORT",
00:10:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:49.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:49.031 "hdgst": ${hdgst:-false},
00:10:49.031 "ddgst": ${ddgst:-false}
00:10:49.031 },
00:10:49.031 "method": "bdev_nvme_attach_controller"
00:10:49.031 }
00:10:49.031 EOF
00:10:49.031 )")
00:10:49.031 [2024-07-25 11:57:24.364112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.031 [2024-07-25 11:57:24.364158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:49.031 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:49.031 "params": {
00:10:49.031 "name": "Nvme1",
00:10:49.031 "trtype": "tcp",
00:10:49.031 "traddr": "10.0.0.2",
00:10:49.031 "adrfam": "ipv4",
00:10:49.031 "trsvcid": "4420",
00:10:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:49.031 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:49.031 "hdgst": false,
00:10:49.031 "ddgst": false
00:10:49.031 },
00:10:49.031 "method": "bdev_nvme_attach_controller"
00:10:49.031 }'
00:10:49.031 [2024-07-25 11:57:24.376110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.031 [2024-07-25 11:57:24.376131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.031 [2024-07-25 11:57:24.384128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.031 [2024-07-25 11:57:24.384145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.031 [2024-07-25 11:57:24.392151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.031 [2024-07-25 11:57:24.392168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.031 [2024-07-25 11:57:24.400177]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.400195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.412219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.412237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.424249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.424267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.436287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.436304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.438503] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:10:49.031 [2024-07-25 11:57:24.438627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014502 ] 00:10:49.031 [2024-07-25 11:57:24.448323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.448340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.460356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.460373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.472391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.472414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.484442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.484459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.496462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.496488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.031 [2024-07-25 11:57:24.508498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.508516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.520536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.520554] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.528555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.528572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.540596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.540621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.548626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.548643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.555397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.031 [2024-07-25 11:57:24.556651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.556668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.568690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.568711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.580722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.580739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.592755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.592772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.604793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:10:49.031 [2024-07-25 11:57:24.604819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.616828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.616849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.628857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.628874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.640892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.640909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.649863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.031 [2024-07-25 11:57:24.652929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.652953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.031 [2024-07-25 11:57:24.664968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.031 [2024-07-25 11:57:24.664990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.677004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.677035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.689036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.689056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.701070] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.701087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.713099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.713118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.725136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.725153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.737171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.737187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.749260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.749289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.761250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.761272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.773279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.773300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.785313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.785329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.797348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.797366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.809417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.809442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.821431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.821453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.833459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.833475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.845502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.845518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.857540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.857556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.869578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.869601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.881620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.881637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.893649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 
[2024-07-25 11:57:24.893665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.905692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.905712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.917725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.917745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.929756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.929773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.941797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.941813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.953830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.953848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.965883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.965910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 Running I/O for 5 seconds... 
00:10:49.032 [2024-07-25 11:57:24.977911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.977928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:24.995424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:24.995453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.012468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.012496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.031928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.031958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.050295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.050323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.068191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.068220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.086990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.087018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.105076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.105105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.123281] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.123310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.142266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.142294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.159875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.159903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.178735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.178764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.197883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.197912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.217191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.217220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.236393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.236421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.256753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.256782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.273350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.273378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.285684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.285713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.300898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.300927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.313558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.313586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.326171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.326198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.339084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.339112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.356765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.356793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.374873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.374901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.393130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 
[2024-07-25 11:57:25.393159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.412251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.412280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.430399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.430427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.450599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.032 [2024-07-25 11:57:25.450636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.032 [2024-07-25 11:57:25.469739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.469767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.487890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.487918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.507153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.507181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.526364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.526392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.544350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.544379] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.563108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.563137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.581224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.581253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.600473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.600502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.618509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.618537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.637737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.637765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.656037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.656065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.675176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.675205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.693571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.693599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:49.033 [2024-07-25 11:57:25.712501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.712529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.731756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.731784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.749614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.749643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.767730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.767759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.785568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.785597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.804555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.804584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.822770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.822800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.841941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.033 [2024-07-25 11:57:25.841970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.033 [2024-07-25 11:57:25.860325] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.860355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.878401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.878435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.897356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.897384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.915350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.915379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.934485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.934514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.953447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.953476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.969948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.969976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:25.988972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:25.989001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.006938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.006967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.024563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.024592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.043843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.043872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.060843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.060871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.077823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.077851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.090193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.090221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.105241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.105270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.117266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.117295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.134377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.134405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.152347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.152375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.171550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.171579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.188385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.188414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.207296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.207329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.225337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.225365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.244229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.244256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.262065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.262093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.280426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.280456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.299841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.299870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.033 [2024-07-25 11:57:26.319339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.033 [2024-07-25 11:57:26.319368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.337655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.337684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.356857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.356885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.375992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.376019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.394903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.394931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.413041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.413069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.432250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.432278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.451355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.451383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.469148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.469176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.488102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.488130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.506261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.506289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.526616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.526645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.543333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.543361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.555474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.555507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.569493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.569521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.293 [2024-07-25 11:57:26.583747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.293 [2024-07-25 11:57:26.583775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.600855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.552 [2024-07-25 11:57:26.600885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.617600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.552 [2024-07-25 11:57:26.617636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.635449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.552 [2024-07-25 11:57:26.635478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.652323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.552 [2024-07-25 11:57:26.652350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.671416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.552 [2024-07-25 11:57:26.671444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.684214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.552 [2024-07-25 11:57:26.684243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.552 [2024-07-25 11:57:26.699617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.699645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.716964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.716993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.733933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.733961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.751944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.751973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.770973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.771001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.790184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.790213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.808686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.808714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.826752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.826781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.553 [2024-07-25 11:57:26.845778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.553 [2024-07-25 11:57:26.845807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.863870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.863898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.880724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.880758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.899267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.899295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.916394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.916422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.935295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.935322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.954611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.954638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.973711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.973740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:26.990726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:26.990754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.003207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.003235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.017246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.017273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.031459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.031486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.046159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.046187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.063231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.063259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.080033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.080061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.092855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.092883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.813 [2024-07-25 11:57:27.107905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.813 [2024-07-25 11:57:27.107934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.122338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.122367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.139687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.139716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.156506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.156534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.174635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.174663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.192835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.192863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.210658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.210685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.229714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.229743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.248644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.248673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.267879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.267908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.285784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.285812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.304845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.304873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.324068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.324097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.341146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.341174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.073 [2024-07-25 11:57:27.360089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.073 [2024-07-25 11:57:27.360117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.378484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.378513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.396735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.396764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.415034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.415062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.433076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.433104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.451489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.451517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.470688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.470716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.489009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.489038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.507725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.507754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.524541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.524571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.537363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.537392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.552770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.552798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.570070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.570098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.332 [2024-07-25 11:57:27.586939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.332 [2024-07-25 11:57:27.586968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.333 [2024-07-25 11:57:27.599311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.333 [2024-07-25 11:57:27.599339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.333 [2024-07-25 11:57:27.613267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.333 [2024-07-25 11:57:27.613295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.333 [2024-07-25 11:57:27.630663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.333 [2024-07-25 11:57:27.630691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.649827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.649856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.667872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.667900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.685834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.685863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.704988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.705017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.724099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.724127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.740947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.740976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.752937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.752964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.767001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.767028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.781266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.781294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.798525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.798552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.815552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.815580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.828143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.828172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.843649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.843677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.857650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.857679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.592 [2024-07-25 11:57:27.871958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.592 [2024-07-25 11:57:27.871986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.593 [2024-07-25 11:57:27.886611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.593 [2024-07-25 11:57:27.886639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:27.903938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:27.903967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:27.921873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:27.921901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:27.940025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:27.940053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:27.957679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:27.957707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:27.976796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:27.976824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:27.996304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:27.996332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.014617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.014645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.033981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.034010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.051920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.051948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.071092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.071121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.089185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.089213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.106806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.106834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.125595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.125632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.852 [2024-07-25 11:57:28.144173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.852 [2024-07-25 11:57:28.144201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.160918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.160946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.179934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.179962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.196454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.196482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.215231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.215259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.234564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.234592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.254112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.254140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.272357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.272384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.289363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.289391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.302281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.302310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.317644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.317672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.335293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.335321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.354484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.354512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.372312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.372339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.389226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.389254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.118 [2024-07-25 11:57:28.408224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.118 [2024-07-25 11:57:28.408252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.426090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.426119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.445347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.445375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.463240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.463269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.481179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.481207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.498911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.498945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.518043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.518072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.535836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.535864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.553970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.553998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.571572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.571600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.590641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.590669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.608766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.608795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.626611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.626638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.646468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.646497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.665863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.665891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.396 [2024-07-25 11:57:28.683752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.396 [2024-07-25 11:57:28.683780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.655 [2024-07-25 11:57:28.702924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.655 [2024-07-25 11:57:28.702954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.655 [2024-07-25 11:57:28.720917]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.655 [2024-07-25 11:57:28.720947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.655 [2024-07-25 11:57:28.738965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.655 [2024-07-25 11:57:28.738995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.655 [2024-07-25 11:57:28.757030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.655 [2024-07-25 11:57:28.757058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.655 [2024-07-25 11:57:28.776253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.776281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.794659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.794688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.812651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.812681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.832064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.832093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.851364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.851399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.868046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.868076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.886667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.886696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.903900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.903928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.923320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.923349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.656 [2024-07-25 11:57:28.942652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.656 [2024-07-25 11:57:28.942682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:28.960636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:28.960665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:28.978821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:28.978849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:28.996547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:28.996575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.014818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 
[2024-07-25 11:57:29.014847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.033744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.033772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.052675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.052704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.071750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.071791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.089769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.089797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.107911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.107939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.126777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.126806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.146396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.915 [2024-07-25 11:57:29.146425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.915 [2024-07-25 11:57:29.164483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.916 [2024-07-25 11:57:29.164511] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.916 [2024-07-25 11:57:29.182707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.916 [2024-07-25 11:57:29.182735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.916 [2024-07-25 11:57:29.202004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.916 [2024-07-25 11:57:29.202037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.221646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.221675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.240990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.241017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.259036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.259063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.276724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.276752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.294714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.294741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.313391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.313420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:52.175 [2024-07-25 11:57:29.331704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.331732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.349467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.349494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.368223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.368251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.175 [2024-07-25 11:57:29.387470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.175 [2024-07-25 11:57:29.387498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.176 [2024-07-25 11:57:29.406796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.176 [2024-07-25 11:57:29.406824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.176 [2024-07-25 11:57:29.420381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.176 [2024-07-25 11:57:29.420408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.176 [2024-07-25 11:57:29.438826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.176 [2024-07-25 11:57:29.438854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.176 [2024-07-25 11:57:29.458109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.176 [2024-07-25 11:57:29.458138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.176 [2024-07-25 11:57:29.476219] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.176 [2024-07-25 11:57:29.476250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.495564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.495593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.509172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.509199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.526704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.526732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.543427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.543461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.561426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.561454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.579631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.579659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.597493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.597521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.616770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.616798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.633642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.633671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.646290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.646318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.661454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.661482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.678515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.678543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.695670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.695698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.708166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.708194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.722159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 [2024-07-25 11:57:29.722187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.436 [2024-07-25 11:57:29.736512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.436 
[2024-07-25 11:57:29.736539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.753859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.753889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.773203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.773231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.791376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.791404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.808072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.808101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.827123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.827152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.845158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.845186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.864029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.864057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.882001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.882030] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.901157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.901187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.919323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.919351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.936924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.936952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.954675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.954703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.972592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.695 [2024-07-25 11:57:29.972626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.695 [2024-07-25 11:57:29.989702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.696 [2024-07-25 11:57:29.989731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.955 00:10:52.955 Latency(us) 00:10:52.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.955 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:52.955 Nvme1n1 : 5.01 8810.54 68.83 0.00 0.00 14508.30 6285.50 26929.34 00:10:52.955 =================================================================================================================== 00:10:52.955 Total : 8810.54 68.83 0.00 
0.00 14508.30 6285.50 26929.34 00:10:52.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4014502) - No such process 00:10:52.955 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4014502 00:10:52.955 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.955 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.955 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.955 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.955 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.956 delay0 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@10 -- # set +x 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.956 11:57:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:53.214 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.214 [2024-07-25 11:57:30.343139] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:59.782 Initializing NVMe Controllers 00:10:59.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.782 Initialization complete. Launching workers. 00:10:59.782 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 185 00:10:59.782 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 452, failed to submit 53 00:10:59.782 success 307, unsuccess 145, failed 0 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.782 rmmod nvme_tcp 00:10:59.782 rmmod nvme_fabrics 00:10:59.782 rmmod nvme_keyring 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4012362 ']' 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4012362 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 4012362 ']' 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 4012362 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4012362 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4012362' 00:10:59.782 killing process with pid 4012362 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 4012362 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 4012362 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.782 
11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.782 11:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.689 00:11:01.689 real 0m32.568s 00:11:01.689 user 0m44.465s 00:11:01.689 sys 0m10.028s 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.689 ************************************ 00:11:01.689 END TEST nvmf_zcopy 00:11:01.689 ************************************ 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.689 11:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.949 ************************************ 00:11:01.949 START TEST nvmf_nmic 00:11:01.949 ************************************ 00:11:01.949 
11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:01.949 * Looking for test storage... 00:11:01.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.949 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.950 11:57:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.950 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.524 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.524 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.524 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:08.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:08.525 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:08.525 Found net devices under 0000:af:00.0: cvl_0_0 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:08.525 Found net devices under 0000:af:00.1: cvl_0_1 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.525 11:57:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.525 11:57:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:08.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:11:08.525 00:11:08.525 --- 10.0.0.2 ping statistics --- 00:11:08.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.525 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:11:08.525 00:11:08.525 --- 10.0.0.1 ping statistics --- 00:11:08.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.525 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4020470 00:11:08.525 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4020470 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 4020470 ']' 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 [2024-07-25 11:57:45.233819] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:11:08.526 [2024-07-25 11:57:45.233875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.526 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.526 [2024-07-25 11:57:45.320338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.526 [2024-07-25 11:57:45.409665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.526 [2024-07-25 11:57:45.409710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.526 [2024-07-25 11:57:45.409720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.526 [2024-07-25 11:57:45.409729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.526 [2024-07-25 11:57:45.409736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:08.526 [2024-07-25 11:57:45.409795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.526 [2024-07-25 11:57:45.409908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.526 [2024-07-25 11:57:45.410018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.526 [2024-07-25 11:57:45.410019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 [2024-07-25 11:57:45.580863] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.526 Malloc0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 [2024-07-25 11:57:45.640954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:08.526 test case1: single bdev can't be used in multiple subsystems 
00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 [2024-07-25 11:57:45.664823] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:08.526 [2024-07-25 11:57:45.664847] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:08.526 [2024-07-25 11:57:45.664857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.526 request: 00:11:08.526 { 00:11:08.526 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:08.526 "namespace": { 00:11:08.526 
"bdev_name": "Malloc0", 00:11:08.526 "no_auto_visible": false 00:11:08.526 }, 00:11:08.526 "method": "nvmf_subsystem_add_ns", 00:11:08.526 "req_id": 1 00:11:08.526 } 00:11:08.526 Got JSON-RPC error response 00:11:08.526 response: 00:11:08.526 { 00:11:08.526 "code": -32602, 00:11:08.526 "message": "Invalid parameters" 00:11:08.526 } 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:08.526 Adding namespace failed - expected result. 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:08.526 test case2: host connect to nvmf target in multiple paths 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.526 [2024-07-25 11:57:45.677030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.526 11:57:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.939 11:57:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:11.318 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.318 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:11.318 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.318 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:11.318 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:13.224 11:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:13.224 [global] 00:11:13.224 thread=1 00:11:13.224 invalidate=1 00:11:13.224 rw=write 00:11:13.224 time_based=1 00:11:13.224 runtime=1 00:11:13.224 ioengine=libaio 00:11:13.224 direct=1 00:11:13.224 bs=4096 00:11:13.224 iodepth=1 00:11:13.224 
norandommap=0 00:11:13.224 numjobs=1 00:11:13.224 00:11:13.224 verify_dump=1 00:11:13.224 verify_backlog=512 00:11:13.224 verify_state_save=0 00:11:13.224 do_verify=1 00:11:13.224 verify=crc32c-intel 00:11:13.224 [job0] 00:11:13.224 filename=/dev/nvme0n1 00:11:13.224 Could not set queue depth (nvme0n1) 00:11:13.794 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.794 fio-3.35 00:11:13.794 Starting 1 thread 00:11:15.170 00:11:15.170 job0: (groupid=0, jobs=1): err= 0: pid=4021530: Thu Jul 25 11:57:52 2024 00:11:15.170 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:11:15.170 slat (nsec): min=10606, max=23418, avg=20688.10, stdev=2817.91 00:11:15.170 clat (usec): min=40881, max=45143, avg=41279.72, stdev=925.00 00:11:15.170 lat (usec): min=40902, max=45159, avg=41300.41, stdev=923.60 00:11:15.170 clat percentiles (usec): 00:11:15.170 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:15.170 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:15.170 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:15.170 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:11:15.170 | 99.99th=[45351] 00:11:15.170 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:15.170 slat (nsec): min=10308, max=45075, avg=11824.31, stdev=2148.09 00:11:15.170 clat (usec): min=293, max=1891, avg=319.58, stdev=71.09 00:11:15.170 lat (usec): min=303, max=1902, avg=331.40, stdev=71.32 00:11:15.170 clat percentiles (usec): 00:11:15.170 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 310], 00:11:15.170 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 314], 60.00th=[ 318], 00:11:15.170 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 326], 95.00th=[ 330], 00:11:15.170 | 99.00th=[ 338], 99.50th=[ 457], 99.90th=[ 1893], 99.95th=[ 1893], 00:11:15.170 | 99.99th=[ 1893] 00:11:15.170 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:15.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.170 lat (usec) : 500=95.68%, 750=0.19% 00:11:15.170 lat (msec) : 2=0.19%, 50=3.94% 00:11:15.170 cpu : usr=0.19%, sys=1.16%, ctx=533, majf=0, minf=2 00:11:15.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.170 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.170 00:11:15.170 Run status group 0 (all jobs): 00:11:15.170 READ: bw=80.8KiB/s (82.8kB/s), 80.8KiB/s-80.8KiB/s (82.8kB/s-82.8kB/s), io=84.0KiB (86.0kB), run=1039-1039msec 00:11:15.170 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:11:15.170 00:11:15.170 Disk stats (read/write): 00:11:15.170 nvme0n1: ios=67/512, merge=0/0, ticks=970/155, in_queue=1125, util=96.99% 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:15.170 rmmod nvme_tcp 00:11:15.170 rmmod nvme_fabrics 00:11:15.170 rmmod nvme_keyring 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4020470 ']' 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4020470 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 4020470 ']' 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 4020470 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4020470 00:11:15.170 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.171 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.171 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4020470' 00:11:15.171 killing process with pid 4020470 00:11:15.171 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 4020470 00:11:15.171 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 4020470 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.430 11:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:17.964 00:11:17.964 real 0m15.713s 00:11:17.964 user 0m41.726s 00:11:17.964 sys 0m5.420s 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.964 ************************************ 00:11:17.964 END TEST nvmf_nmic 00:11:17.964 ************************************ 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:17.964 ************************************ 00:11:17.964 START TEST nvmf_fio_target 00:11:17.964 ************************************ 00:11:17.964 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.964 * Looking for test storage... 
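The verify workload traced at the start of the nmic test above can be reassembled into a fio job file. This is a reconstruction from the parameters visible in the log only; the `[global]`/`[job0]` split and any parameter not shown in the trace are assumptions:

```ini
; Sketch of the verify job from the nmic trace above, rebuilt from the
; logged parameters (rw/bs/ioengine/iodepth come from fio's job banner).
[global]
ioengine=libaio
iodepth=1
rw=write
bs=4096
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
```

With `do_verify=1` and `verify=crc32c-intel`, fio reads back what it wrote and checks CRC32C checksums, which is why the run status above shows both a READ and a WRITE group even though the job is `rw=write`.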
00:11:17.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.965 11:57:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.965 11:57:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.965 11:57:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:24.534 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:11:24.534 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:24.535 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:24.535 Found net devices under 0000:af:00.0: cvl_0_0 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:24.535 Found net devices under 0000:af:00.1: cvl_0_1 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:24.535 11:58:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:11:24.535 00:11:24.535 --- 10.0.0.2 ping statistics --- 00:11:24.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.535 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
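The namespace plumbing traced above (nvmf/common.sh, `nvmf_tcp_init`) reduces to a short command sequence: move the target-side interface into a network namespace, address both sides, bring links up, open port 4420, and ping in both directions. The sketch below is a dry run that prints the commands instead of executing them (so it needs no root or hardware); interface names and addresses are taken from the log, and the `RUN=echo` wrapper is an addition for illustration:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup traced above. RUN=echo prints each
# command; clear RUN and run as root against real NICs to apply it.
RUN="echo"
NS=cvl_0_0_ns_spdk

setup_netns() {
  $RUN ip -4 addr flush cvl_0_0
  $RUN ip -4 addr flush cvl_0_1
  $RUN ip netns add "$NS"
  $RUN ip link set cvl_0_0 netns "$NS"            # target NIC moves into the namespace
  $RUN ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP stays in the root namespace
  $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  $RUN ip link set cvl_0_1 up
  $RUN ip netns exec "$NS" ip link set cvl_0_0 up
  $RUN ip netns exec "$NS" ip link set lo up
  $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  $RUN ping -c 1 10.0.0.2                         # initiator -> target
  $RUN ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
}
```

Running the target inside its own namespace is what lets a single host act as both NVMe-oF target (10.0.0.2) and initiator (10.0.0.1) over a real cabled NIC pair, as the two successful pings in the log confirm.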
00:11:24.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:11:24.535 00:11:24.535 --- 10.0.0.1 ping statistics --- 00:11:24.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.535 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4025495 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4025495 00:11:24.535 11:58:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 4025495 ']' 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.535 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.535 [2024-07-25 11:58:00.981950] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:11:24.535 [2024-07-25 11:58:00.982015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.535 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.535 [2024-07-25 11:58:01.068130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.535 [2024-07-25 11:58:01.156550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.535 [2024-07-25 11:58:01.156595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:24.535 [2024-07-25 11:58:01.156614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.535 [2024-07-25 11:58:01.156624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.535 [2024-07-25 11:58:01.156632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.535 [2024-07-25 11:58:01.156690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.535 [2024-07-25 11:58:01.156802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.535 [2024-07-25 11:58:01.156888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.535 [2024-07-25 11:58:01.156889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.535 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.535 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:24.535 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.535 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.535 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.535 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.536 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:24.536 [2024-07-25 11:58:01.552865] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.536 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.798 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:24.798 11:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.056 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:25.056 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.315 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:25.315 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.574 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:25.574 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:25.574 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.143 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:26.143 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.143 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:26.143 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.401 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:26.401 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:26.659 11:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.917 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:26.917 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.176 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:27.176 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.435 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.693 [2024-07-25 11:58:04.777872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.693 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:27.693 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:27.950 11:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.325 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:29.325 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.325 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.325 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:29.325 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:29.325 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:31.858 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:31.858 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:31.858 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.858 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:31.858 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.858 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:31.859 11:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:31.859 [global] 00:11:31.859 thread=1 00:11:31.859 invalidate=1 00:11:31.859 rw=write 00:11:31.859 time_based=1 00:11:31.859 runtime=1 00:11:31.859 ioengine=libaio 00:11:31.859 direct=1 00:11:31.859 bs=4096 00:11:31.859 iodepth=1 00:11:31.859 norandommap=0 00:11:31.859 numjobs=1 00:11:31.859 00:11:31.859 verify_dump=1 00:11:31.859 verify_backlog=512 00:11:31.859 verify_state_save=0 00:11:31.859 do_verify=1 00:11:31.859 verify=crc32c-intel 00:11:31.859 [job0] 00:11:31.859 filename=/dev/nvme0n1 00:11:31.859 [job1] 00:11:31.859 filename=/dev/nvme0n2 00:11:31.859 [job2] 00:11:31.859 filename=/dev/nvme0n3 00:11:31.859 [job3] 00:11:31.859 filename=/dev/nvme0n4 00:11:31.859 Could not set queue depth (nvme0n1) 00:11:31.859 Could not set queue depth (nvme0n2) 00:11:31.859 Could not set queue depth (nvme0n3) 00:11:31.859 Could not set queue depth (nvme0n4) 00:11:31.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.859 fio-3.35 00:11:31.859 Starting 4 threads 00:11:33.237 00:11:33.237 job0: (groupid=0, jobs=1): err= 0: pid=4027273: Thu Jul 25 11:58:10 2024 00:11:33.237 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:11:33.237 slat (nsec): min=9233, max=24491, avg=22683.29, stdev=3122.09 00:11:33.237 clat (usec): min=40890, max=44897, avg=41615.35, stdev=886.56 00:11:33.237 lat (usec): min=40914, max=44920, avg=41638.03, stdev=886.44 00:11:33.237 clat percentiles (usec): 00:11:33.237 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:11:33.237 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:33.237 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:33.237 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:33.237 | 99.99th=[44827] 00:11:33.237 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:33.237 slat (nsec): min=9599, max=49360, avg=10929.49, stdev=2546.99 00:11:33.237 clat (usec): min=240, max=667, avg=282.17, stdev=23.82 00:11:33.237 lat (usec): min=251, max=716, avg=293.10, stdev=25.00 00:11:33.237 clat percentiles (usec): 00:11:33.237 | 1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:11:33.237 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:11:33.237 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:11:33.237 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 668], 99.95th=[ 668], 00:11:33.237 | 99.99th=[ 668] 00:11:33.237 bw ( KiB/s): min= 4087, max= 4087, per=36.21%, avg=4087.00, stdev= 0.00, samples=1 00:11:33.237 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:33.237 lat (usec) : 250=1.88%, 500=94.00%, 750=0.19% 00:11:33.237 lat (msec) : 50=3.94% 00:11:33.237 cpu : usr=0.58%, sys=0.19%, ctx=535, majf=0, minf=1 00:11:33.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.237 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.237 job1: (groupid=0, jobs=1): err= 0: pid=4027274: Thu Jul 25 11:58:10 2024 00:11:33.237 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:33.237 slat (nsec): min=4652, max=32138, avg=7974.29, stdev=2052.64 00:11:33.237 clat (usec): min=332, max=2086, 
avg=464.36, stdev=73.41 00:11:33.237 lat (usec): min=340, max=2093, avg=472.33, stdev=73.37 00:11:33.237 clat percentiles (usec): 00:11:33.237 | 1.00th=[ 347], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 424], 00:11:33.237 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 457], 60.00th=[ 465], 00:11:33.237 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 545], 00:11:33.237 | 99.00th=[ 660], 99.50th=[ 709], 99.90th=[ 840], 99.95th=[ 2089], 00:11:33.237 | 99.99th=[ 2089] 00:11:33.237 write: IOPS=1380, BW=5522KiB/s (5655kB/s)(5528KiB/1001msec); 0 zone resets 00:11:33.237 slat (nsec): min=10702, max=47939, avg=12561.73, stdev=2237.35 00:11:33.237 clat (usec): min=224, max=826, avg=353.12, stdev=98.24 00:11:33.237 lat (usec): min=236, max=873, avg=365.68, stdev=98.85 00:11:33.237 clat percentiles (usec): 00:11:33.237 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:11:33.237 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 347], 00:11:33.237 | 70.00th=[ 416], 80.00th=[ 469], 90.00th=[ 502], 95.00th=[ 515], 00:11:33.237 | 99.00th=[ 545], 99.50th=[ 668], 99.90th=[ 775], 99.95th=[ 824], 00:11:33.237 | 99.99th=[ 824] 00:11:33.237 bw ( KiB/s): min= 4335, max= 4335, per=38.40%, avg=4335.00, stdev= 0.00, samples=1 00:11:33.237 iops : min= 1083, max= 1083, avg=1083.00, stdev= 0.00, samples=1 00:11:33.237 lat (usec) : 250=9.89%, 500=77.64%, 750=12.26%, 1000=0.17% 00:11:33.237 lat (msec) : 4=0.04% 00:11:33.237 cpu : usr=1.40%, sys=4.60%, ctx=2408, majf=0, minf=1 00:11:33.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.237 issued rwts: total=1024,1382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.237 job2: (groupid=0, jobs=1): err= 0: pid=4027278: Thu Jul 25 11:58:10 2024 
00:11:33.237 read: IOPS=28, BW=114KiB/s (117kB/s)(116KiB/1014msec) 00:11:33.237 slat (nsec): min=4716, max=28331, avg=19006.97, stdev=8137.23 00:11:33.237 clat (usec): min=439, max=41995, avg=29558.22, stdev=18559.35 00:11:33.237 lat (usec): min=445, max=42018, avg=29577.22, stdev=18565.73 00:11:33.237 clat percentiles (usec): 00:11:33.237 | 1.00th=[ 441], 5.00th=[ 457], 10.00th=[ 461], 20.00th=[ 465], 00:11:33.237 | 30.00th=[21627], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:33.237 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:33.237 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:33.237 | 99.99th=[42206] 00:11:33.237 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:11:33.237 slat (nsec): min=9890, max=33549, avg=11145.32, stdev=1922.65 00:11:33.237 clat (usec): min=211, max=789, avg=282.61, stdev=30.31 00:11:33.237 lat (usec): min=222, max=823, avg=293.76, stdev=31.38 00:11:33.237 clat percentiles (usec): 00:11:33.237 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 269], 00:11:33.237 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:11:33.237 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:11:33.237 | 99.00th=[ 338], 99.50th=[ 400], 99.90th=[ 791], 99.95th=[ 791], 00:11:33.237 | 99.99th=[ 791] 00:11:33.237 bw ( KiB/s): min= 4087, max= 4087, per=36.21%, avg=4087.00, stdev= 0.00, samples=1 00:11:33.237 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:33.237 lat (usec) : 250=2.03%, 500=93.35%, 750=0.37%, 1000=0.18% 00:11:33.237 lat (msec) : 2=0.18%, 50=3.88% 00:11:33.237 cpu : usr=0.30%, sys=0.59%, ctx=542, majf=0, minf=1 00:11:33.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.237 issued 
rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.237 job3: (groupid=0, jobs=1): err= 0: pid=4027279: Thu Jul 25 11:58:10 2024 00:11:33.237 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:11:33.237 slat (nsec): min=10356, max=29121, avg=22744.89, stdev=3491.61 00:11:33.237 clat (usec): min=40825, max=41810, avg=41094.71, stdev=281.87 00:11:33.238 lat (usec): min=40847, max=41833, avg=41117.45, stdev=281.89 00:11:33.238 clat percentiles (usec): 00:11:33.238 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:33.238 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:33.238 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:33.238 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:33.238 | 99.99th=[41681] 00:11:33.238 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:33.238 slat (nsec): min=11064, max=44795, avg=13041.46, stdev=2909.55 00:11:33.238 clat (usec): min=385, max=863, avg=468.45, stdev=55.24 00:11:33.238 lat (usec): min=397, max=893, avg=481.49, stdev=56.20 00:11:33.238 clat percentiles (usec): 00:11:33.238 | 1.00th=[ 396], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:11:33.238 | 30.00th=[ 424], 40.00th=[ 437], 50.00th=[ 482], 60.00th=[ 494], 00:11:33.238 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 515], 95.00th=[ 529], 00:11:33.238 | 99.00th=[ 676], 99.50th=[ 775], 99.90th=[ 865], 99.95th=[ 865], 00:11:33.238 | 99.99th=[ 865] 00:11:33.238 bw ( KiB/s): min= 4087, max= 4087, per=36.21%, avg=4087.00, stdev= 0.00, samples=1 00:11:33.238 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:33.238 lat (usec) : 500=70.43%, 750=25.42%, 1000=0.56% 00:11:33.238 lat (msec) : 50=3.58% 00:11:33.238 cpu : usr=0.39%, sys=1.06%, ctx=532, majf=0, minf=2 00:11:33.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:33.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.238 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.238 00:11:33.238 Run status group 0 (all jobs): 00:11:33.238 READ: bw=4228KiB/s (4330kB/s), 73.5KiB/s-4092KiB/s (75.3kB/s-4190kB/s), io=4372KiB (4477kB), run=1001-1034msec 00:11:33.238 WRITE: bw=11.0MiB/s (11.6MB/s), 1981KiB/s-5522KiB/s (2028kB/s-5655kB/s), io=11.4MiB (12.0MB), run=1001-1034msec 00:11:33.238 00:11:33.238 Disk stats (read/write): 00:11:33.238 nvme0n1: ios=41/512, merge=0/0, ticks=1624/139, in_queue=1763, util=95.49% 00:11:33.238 nvme0n2: ios=910/1024, merge=0/0, ticks=1332/388, in_queue=1720, util=100.00% 00:11:33.238 nvme0n3: ios=49/512, merge=0/0, ticks=1595/138, in_queue=1733, util=100.00% 00:11:33.238 nvme0n4: ios=72/512, merge=0/0, ticks=1457/228, in_queue=1685, util=100.00% 00:11:33.238 11:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:33.238 [global] 00:11:33.238 thread=1 00:11:33.238 invalidate=1 00:11:33.238 rw=randwrite 00:11:33.238 time_based=1 00:11:33.238 runtime=1 00:11:33.238 ioengine=libaio 00:11:33.238 direct=1 00:11:33.238 bs=4096 00:11:33.238 iodepth=1 00:11:33.238 norandommap=0 00:11:33.238 numjobs=1 00:11:33.238 00:11:33.238 verify_dump=1 00:11:33.238 verify_backlog=512 00:11:33.238 verify_state_save=0 00:11:33.238 do_verify=1 00:11:33.238 verify=crc32c-intel 00:11:33.238 [job0] 00:11:33.238 filename=/dev/nvme0n1 00:11:33.238 [job1] 00:11:33.238 filename=/dev/nvme0n2 00:11:33.238 [job2] 00:11:33.238 filename=/dev/nvme0n3 00:11:33.238 [job3] 00:11:33.238 filename=/dev/nvme0n4 00:11:33.238 Could not set queue depth (nvme0n1) 00:11:33.238 Could not set queue 
depth (nvme0n2) 00:11:33.238 Could not set queue depth (nvme0n3) 00:11:33.238 Could not set queue depth (nvme0n4) 00:11:33.496 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.496 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.496 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.496 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.496 fio-3.35 00:11:33.496 Starting 4 threads 00:11:34.873 00:11:34.873 job0: (groupid=0, jobs=1): err= 0: pid=4027699: Thu Jul 25 11:58:12 2024 00:11:34.873 read: IOPS=22, BW=91.5KiB/s (93.6kB/s)(92.0KiB/1006msec) 00:11:34.873 slat (nsec): min=7809, max=24068, avg=18742.26, stdev=5929.37 00:11:34.873 clat (usec): min=508, max=42313, avg=32337.06, stdev=17126.92 00:11:34.873 lat (usec): min=531, max=42322, avg=32355.80, stdev=17125.37 00:11:34.873 clat percentiles (usec): 00:11:34.873 | 1.00th=[ 510], 5.00th=[ 529], 10.00th=[ 537], 20.00th=[ 652], 00:11:34.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:34.873 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:11:34.873 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:34.873 | 99.99th=[42206] 00:11:34.873 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:11:34.873 slat (nsec): min=9592, max=42486, avg=11257.40, stdev=2931.59 00:11:34.873 clat (usec): min=402, max=761, avg=495.78, stdev=42.18 00:11:34.873 lat (usec): min=430, max=803, avg=507.04, stdev=42.53 00:11:34.873 clat percentiles (usec): 00:11:34.873 | 1.00th=[ 424], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 465], 00:11:34.873 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 502], 00:11:34.873 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 529], 
95.00th=[ 553], 00:11:34.873 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 758], 99.95th=[ 758], 00:11:34.873 | 99.99th=[ 758] 00:11:34.874 bw ( KiB/s): min= 4096, max= 4096, per=33.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.874 lat (usec) : 500=56.07%, 750=40.37%, 1000=0.19% 00:11:34.874 lat (msec) : 50=3.36% 00:11:34.874 cpu : usr=0.40%, sys=0.50%, ctx=536, majf=0, minf=1 00:11:34.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.874 job1: (groupid=0, jobs=1): err= 0: pid=4027700: Thu Jul 25 11:58:12 2024 00:11:34.874 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:34.874 slat (nsec): min=7074, max=37562, avg=8128.67, stdev=1569.36 00:11:34.874 clat (usec): min=262, max=1059, avg=366.18, stdev=49.97 00:11:34.874 lat (usec): min=269, max=1066, avg=374.31, stdev=50.04 00:11:34.874 clat percentiles (usec): 00:11:34.874 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 338], 00:11:34.874 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:11:34.874 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 478], 00:11:34.874 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 889], 99.95th=[ 1057], 00:11:34.874 | 99.99th=[ 1057] 00:11:34.874 write: IOPS=1678, BW=6713KiB/s (6874kB/s)(6720KiB/1001msec); 0 zone resets 00:11:34.874 slat (nsec): min=10483, max=45858, avg=11837.30, stdev=2000.19 00:11:34.874 clat (usec): min=186, max=751, avg=235.17, stdev=27.66 00:11:34.874 lat (usec): min=205, max=786, avg=247.01, stdev=28.02 00:11:34.874 clat percentiles (usec): 00:11:34.874 | 1.00th=[ 200], 5.00th=[ 206], 
10.00th=[ 210], 20.00th=[ 215], 00:11:34.874 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:11:34.874 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 281], 00:11:34.874 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 465], 99.95th=[ 750], 00:11:34.874 | 99.99th=[ 750] 00:11:34.874 bw ( KiB/s): min= 8192, max= 8192, per=66.23%, avg=8192.00, stdev= 0.00, samples=1 00:11:34.874 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:34.874 lat (usec) : 250=38.43%, 500=60.26%, 750=1.12%, 1000=0.16% 00:11:34.874 lat (msec) : 2=0.03% 00:11:34.874 cpu : usr=2.90%, sys=4.90%, ctx=3217, majf=0, minf=1 00:11:34.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 issued rwts: total=1536,1680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.874 job2: (groupid=0, jobs=1): err= 0: pid=4027701: Thu Jul 25 11:58:12 2024 00:11:34.874 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:11:34.874 slat (nsec): min=10262, max=26074, avg=22112.21, stdev=3013.78 00:11:34.874 clat (usec): min=40870, max=41482, avg=40993.03, stdev=130.05 00:11:34.874 lat (usec): min=40891, max=41493, avg=41015.15, stdev=127.69 00:11:34.874 clat percentiles (usec): 00:11:34.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:34.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:34.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:34.874 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:34.874 | 99.99th=[41681] 00:11:34.874 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:11:34.874 slat (nsec): min=10970, max=50012, avg=12393.99, stdev=2507.36 
00:11:34.874 clat (usec): min=419, max=746, avg=493.49, stdev=38.28 00:11:34.874 lat (usec): min=431, max=758, avg=505.88, stdev=38.21 00:11:34.874 clat percentiles (usec): 00:11:34.874 | 1.00th=[ 424], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 465], 00:11:34.874 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 502], 00:11:34.874 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 529], 95.00th=[ 545], 00:11:34.874 | 99.00th=[ 627], 99.50th=[ 717], 99.90th=[ 750], 99.95th=[ 750], 00:11:34.874 | 99.99th=[ 750] 00:11:34.874 bw ( KiB/s): min= 4096, max= 4096, per=33.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.874 lat (usec) : 500=57.06%, 750=39.36% 00:11:34.874 lat (msec) : 50=3.58% 00:11:34.874 cpu : usr=0.38%, sys=0.96%, ctx=532, majf=0, minf=1 00:11:34.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.874 job3: (groupid=0, jobs=1): err= 0: pid=4027702: Thu Jul 25 11:58:12 2024 00:11:34.874 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:11:34.874 slat (nsec): min=9259, max=24506, avg=20907.18, stdev=4557.50 00:11:34.874 clat (usec): min=583, max=41648, avg=37372.42, stdev=11887.94 00:11:34.874 lat (usec): min=605, max=41658, avg=37393.33, stdev=11887.60 00:11:34.874 clat percentiles (usec): 00:11:34.874 | 1.00th=[ 586], 5.00th=[ 709], 10.00th=[40633], 20.00th=[41157], 00:11:34.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:34.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:34.874 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 
00:11:34.874 | 99.99th=[41681] 00:11:34.874 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:11:34.874 slat (nsec): min=10869, max=51261, avg=12390.34, stdev=2534.17 00:11:34.874 clat (usec): min=334, max=568, avg=391.26, stdev=32.67 00:11:34.874 lat (usec): min=347, max=579, avg=403.65, stdev=32.61 00:11:34.874 clat percentiles (usec): 00:11:34.874 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 367], 00:11:34.874 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 392], 00:11:34.874 | 70.00th=[ 400], 80.00th=[ 408], 90.00th=[ 424], 95.00th=[ 449], 00:11:34.874 | 99.00th=[ 519], 99.50th=[ 553], 99.90th=[ 570], 99.95th=[ 570], 00:11:34.874 | 99.99th=[ 570] 00:11:34.874 bw ( KiB/s): min= 4096, max= 4096, per=33.11%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.874 lat (usec) : 500=93.63%, 750=2.62% 00:11:34.874 lat (msec) : 50=3.75% 00:11:34.874 cpu : usr=0.10%, sys=1.26%, ctx=535, majf=0, minf=1 00:11:34.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.874 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.874 00:11:34.874 Run status group 0 (all jobs): 00:11:34.874 READ: bw=6154KiB/s (6302kB/s), 73.1KiB/s-6138KiB/s (74.8kB/s-6285kB/s), io=6400KiB (6554kB), run=1001-1040msec 00:11:34.874 WRITE: bw=12.1MiB/s (12.7MB/s), 1969KiB/s-6713KiB/s (2016kB/s-6874kB/s), io=12.6MiB (13.2MB), run=1001-1040msec 00:11:34.874 00:11:34.874 Disk stats (read/write): 00:11:34.874 nvme0n1: ios=53/512, merge=0/0, ticks=1943/250, in_queue=2193, util=97.51% 00:11:34.874 nvme0n2: ios=1049/1436, merge=0/0, ticks=1355/329, in_queue=1684, util=97.18% 00:11:34.874 nvme0n3: 
ios=75/512, merge=0/0, ticks=1457/252, in_queue=1709, util=97.79% 00:11:34.874 nvme0n4: ios=55/512, merge=0/0, ticks=1421/185, in_queue=1606, util=97.46% 00:11:34.874 11:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:34.874 [global] 00:11:34.874 thread=1 00:11:34.874 invalidate=1 00:11:34.874 rw=write 00:11:34.874 time_based=1 00:11:34.874 runtime=1 00:11:34.874 ioengine=libaio 00:11:34.874 direct=1 00:11:34.874 bs=4096 00:11:34.874 iodepth=128 00:11:34.874 norandommap=0 00:11:34.874 numjobs=1 00:11:34.874 00:11:34.874 verify_dump=1 00:11:34.874 verify_backlog=512 00:11:34.874 verify_state_save=0 00:11:34.874 do_verify=1 00:11:34.874 verify=crc32c-intel 00:11:34.874 [job0] 00:11:34.874 filename=/dev/nvme0n1 00:11:34.874 [job1] 00:11:34.874 filename=/dev/nvme0n2 00:11:34.874 [job2] 00:11:34.874 filename=/dev/nvme0n3 00:11:34.874 [job3] 00:11:34.874 filename=/dev/nvme0n4 00:11:35.151 Could not set queue depth (nvme0n1) 00:11:35.151 Could not set queue depth (nvme0n2) 00:11:35.151 Could not set queue depth (nvme0n3) 00:11:35.151 Could not set queue depth (nvme0n4) 00:11:35.409 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.409 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.409 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.409 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.409 fio-3.35 00:11:35.409 Starting 4 threads 00:11:36.834 00:11:36.834 job0: (groupid=0, jobs=1): err= 0: pid=4028121: Thu Jul 25 11:58:13 2024 00:11:36.834 read: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec) 00:11:36.834 slat (nsec): min=1547, max=25149k, avg=235613.23, 
stdev=1715838.14 00:11:36.834 clat (usec): min=4051, max=83961, avg=28309.97, stdev=14308.37 00:11:36.834 lat (usec): min=4059, max=83966, avg=28545.58, stdev=14461.88 00:11:36.834 clat percentiles (usec): 00:11:36.834 | 1.00th=[ 4146], 5.00th=[11600], 10.00th=[13304], 20.00th=[14615], 00:11:36.834 | 30.00th=[19792], 40.00th=[21365], 50.00th=[25297], 60.00th=[32637], 00:11:36.834 | 70.00th=[37487], 80.00th=[38011], 90.00th=[40633], 95.00th=[54789], 00:11:36.834 | 99.00th=[78119], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:11:36.834 | 99.99th=[84411] 00:11:36.834 write: IOPS=2115, BW=8463KiB/s (8667kB/s)(8692KiB/1027msec); 0 zone resets 00:11:36.834 slat (usec): min=3, max=33701, avg=234.83, stdev=1324.37 00:11:36.834 clat (usec): min=5972, max=83963, avg=32949.92, stdev=16063.45 00:11:36.834 lat (usec): min=5979, max=83979, avg=33184.75, stdev=16170.04 00:11:36.834 clat percentiles (usec): 00:11:36.834 | 1.00th=[ 8717], 5.00th=[12911], 10.00th=[14091], 20.00th=[14615], 00:11:36.834 | 30.00th=[19268], 40.00th=[24511], 50.00th=[33424], 60.00th=[39584], 00:11:36.834 | 70.00th=[46400], 80.00th=[51119], 90.00th=[51119], 95.00th=[53216], 00:11:36.834 | 99.00th=[66323], 99.50th=[67634], 99.90th=[72877], 99.95th=[84411], 00:11:36.834 | 99.99th=[84411] 00:11:36.834 bw ( KiB/s): min= 4608, max=11824, per=24.54%, avg=8216.00, stdev=5102.48, samples=2 00:11:36.834 iops : min= 1152, max= 2956, avg=2054.00, stdev=1275.62, samples=2 00:11:36.834 lat (msec) : 10=2.13%, 20=31.01%, 50=53.49%, 100=13.36% 00:11:36.834 cpu : usr=1.66%, sys=2.24%, ctx=217, majf=0, minf=1 00:11:36.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:36.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.834 issued rwts: total=2048,2173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.834 latency : target=0, window=0, percentile=100.00%, depth=128 
00:11:36.834 job1: (groupid=0, jobs=1): err= 0: pid=4028122: Thu Jul 25 11:58:13 2024 00:11:36.834 read: IOPS=1149, BW=4599KiB/s (4709kB/s)(4852KiB/1055msec) 00:11:36.834 slat (usec): min=3, max=43102, avg=425.44, stdev=3219.18 00:11:36.834 clat (msec): min=19, max=142, avg=52.50, stdev=26.82 00:11:36.834 lat (msec): min=19, max=145, avg=52.93, stdev=27.11 00:11:36.834 clat percentiles (msec): 00:11:36.834 | 1.00th=[ 20], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 32], 00:11:36.834 | 30.00th=[ 35], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 55], 00:11:36.834 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 83], 95.00th=[ 108], 00:11:36.834 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:11:36.834 | 99.99th=[ 142] 00:11:36.834 write: IOPS=1455, BW=5824KiB/s (5963kB/s)(6144KiB/1055msec); 0 zone resets 00:11:36.834 slat (usec): min=5, max=20529, avg=311.60, stdev=1411.75 00:11:36.834 clat (usec): min=1124, max=142230, avg=45646.02, stdev=25544.38 00:11:36.834 lat (usec): min=1135, max=145276, avg=45957.63, stdev=25657.37 00:11:36.834 clat percentiles (msec): 00:11:36.834 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 20], 00:11:36.834 | 30.00th=[ 28], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 49], 00:11:36.834 | 70.00th=[ 52], 80.00th=[ 52], 90.00th=[ 88], 95.00th=[ 104], 00:11:36.834 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 142], 99.95th=[ 142], 00:11:36.834 | 99.99th=[ 142] 00:11:36.834 bw ( KiB/s): min= 4096, max= 8192, per=18.35%, avg=6144.00, stdev=2896.31, samples=2 00:11:36.834 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:36.834 lat (msec) : 2=0.07%, 20=13.86%, 50=47.65%, 100=30.67%, 250=7.75% 00:11:36.834 cpu : usr=1.61%, sys=1.71%, ctx=157, majf=0, minf=1 00:11:36.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:11:36.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:11:36.834 issued rwts: total=1213,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.834 job2: (groupid=0, jobs=1): err= 0: pid=4028123: Thu Jul 25 11:58:13 2024 00:11:36.834 read: IOPS=2055, BW=8221KiB/s (8418kB/s)(8344KiB/1015msec) 00:11:36.834 slat (usec): min=2, max=13881, avg=138.74, stdev=976.76 00:11:36.834 clat (usec): min=5749, max=29528, avg=16770.34, stdev=3840.49 00:11:36.834 lat (usec): min=5755, max=29532, avg=16909.08, stdev=3906.92 00:11:36.834 clat percentiles (usec): 00:11:36.834 | 1.00th=[ 6718], 5.00th=[11863], 10.00th=[14746], 20.00th=[15139], 00:11:36.834 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15664], 60.00th=[16319], 00:11:36.834 | 70.00th=[16450], 80.00th=[17957], 90.00th=[22676], 95.00th=[24773], 00:11:36.834 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:11:36.834 | 99.99th=[29492] 00:11:36.834 write: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec); 0 zone resets 00:11:36.834 slat (usec): min=3, max=106475, avg=271.40, stdev=3617.46 00:11:36.834 clat (usec): min=1946, max=413141, avg=21747.78, stdev=30270.31 00:11:36.834 lat (msec): min=3, max=413, avg=22.02, stdev=31.24 00:11:36.834 clat percentiles (msec): 00:11:36.834 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 13], 00:11:36.834 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 17], 00:11:36.834 | 70.00th=[ 17], 80.00th=[ 17], 90.00th=[ 31], 95.00th=[ 53], 00:11:36.834 | 99.00th=[ 171], 99.50th=[ 255], 99.90th=[ 414], 99.95th=[ 414], 00:11:36.834 | 99.99th=[ 414] 00:11:36.834 bw ( KiB/s): min= 3616, max=16152, per=29.53%, avg=9884.00, stdev=8864.29, samples=2 00:11:36.834 iops : min= 904, max= 4038, avg=2471.00, stdev=2216.07, samples=2 00:11:36.834 lat (msec) : 2=0.02%, 4=0.26%, 10=5.62%, 20=77.85%, 50=10.76% 00:11:36.834 lat (msec) : 100=4.11%, 250=1.03%, 500=0.34% 00:11:36.834 cpu : usr=2.37%, sys=2.96%, ctx=306, majf=0, minf=1 00:11:36.834 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:36.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.834 issued rwts: total=2086,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.834 job3: (groupid=0, jobs=1): err= 0: pid=4028124: Thu Jul 25 11:58:13 2024 00:11:36.834 read: IOPS=2196, BW=8787KiB/s (8997kB/s)(8892KiB/1012msec) 00:11:36.834 slat (usec): min=2, max=20951, avg=163.63, stdev=1290.99 00:11:36.834 clat (usec): min=10239, max=52084, avg=21188.46, stdev=6634.60 00:11:36.834 lat (usec): min=10251, max=52112, avg=21352.10, stdev=6741.75 00:11:36.834 clat percentiles (usec): 00:11:36.835 | 1.00th=[11338], 5.00th=[13304], 10.00th=[15664], 20.00th=[16909], 00:11:36.835 | 30.00th=[17957], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:11:36.835 | 70.00th=[21365], 80.00th=[25822], 90.00th=[30802], 95.00th=[35390], 00:11:36.835 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:11:36.835 | 99.99th=[52167] 00:11:36.835 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:11:36.835 slat (usec): min=3, max=88576, avg=241.82, stdev=2969.41 00:11:36.835 clat (msec): min=5, max=315, avg=20.05, stdev=22.31 00:11:36.835 lat (msec): min=5, max=315, avg=20.29, stdev=23.06 00:11:36.835 clat percentiles (msec): 00:11:36.835 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:11:36.835 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 19], 00:11:36.835 | 70.00th=[ 20], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 29], 00:11:36.835 | 99.00th=[ 134], 99.50th=[ 207], 99.90th=[ 317], 99.95th=[ 317], 00:11:36.835 | 99.99th=[ 317] 00:11:36.835 bw ( KiB/s): min= 7648, max=12832, per=30.59%, avg=10240.00, stdev=3665.64, samples=2 00:11:36.835 iops : min= 1912, max= 3208, avg=2560.00, stdev=916.41, samples=2 00:11:36.835 
lat (msec) : 10=2.95%, 20=69.33%, 50=26.36%, 100=0.69%, 250=0.50% 00:11:36.835 lat (msec) : 500=0.17% 00:11:36.835 cpu : usr=2.47%, sys=3.26%, ctx=170, majf=0, minf=1 00:11:36.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:36.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.835 issued rwts: total=2223,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.835 00:11:36.835 Run status group 0 (all jobs): 00:11:36.835 READ: bw=28.0MiB/s (29.4MB/s), 4599KiB/s-8787KiB/s (4709kB/s-8997kB/s), io=29.6MiB (31.0MB), run=1012-1055msec 00:11:36.835 WRITE: bw=32.7MiB/s (34.3MB/s), 5824KiB/s-9.88MiB/s (5963kB/s-10.4MB/s), io=34.5MiB (36.2MB), run=1012-1055msec 00:11:36.835 00:11:36.835 Disk stats (read/write): 00:11:36.835 nvme0n1: ios=1586/1951, merge=0/0, ticks=34414/55957, in_queue=90371, util=88.88% 00:11:36.835 nvme0n2: ios=1048/1415, merge=0/0, ticks=38486/48479, in_queue=86965, util=99.59% 00:11:36.835 nvme0n3: ios=1572/1591, merge=0/0, ticks=25341/21012, in_queue=46353, util=100.00% 00:11:36.835 nvme0n4: ios=1573/1975, merge=0/0, ticks=33194/30404, in_queue=63598, util=95.77% 00:11:36.835 11:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:36.835 [global] 00:11:36.835 thread=1 00:11:36.835 invalidate=1 00:11:36.835 rw=randwrite 00:11:36.835 time_based=1 00:11:36.835 runtime=1 00:11:36.835 ioengine=libaio 00:11:36.835 direct=1 00:11:36.835 bs=4096 00:11:36.835 iodepth=128 00:11:36.835 norandommap=0 00:11:36.835 numjobs=1 00:11:36.835 00:11:36.835 verify_dump=1 00:11:36.835 verify_backlog=512 00:11:36.835 verify_state_save=0 00:11:36.835 do_verify=1 00:11:36.835 verify=crc32c-intel 00:11:36.835 [job0] 
00:11:36.835 filename=/dev/nvme0n1 00:11:36.835 [job1] 00:11:36.835 filename=/dev/nvme0n2 00:11:36.835 [job2] 00:11:36.835 filename=/dev/nvme0n3 00:11:36.835 [job3] 00:11:36.835 filename=/dev/nvme0n4 00:11:36.835 Could not set queue depth (nvme0n1) 00:11:36.835 Could not set queue depth (nvme0n2) 00:11:36.835 Could not set queue depth (nvme0n3) 00:11:36.835 Could not set queue depth (nvme0n4) 00:11:37.095 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.095 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.095 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.095 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.095 fio-3.35 00:11:37.095 Starting 4 threads 00:11:38.502 00:11:38.502 job0: (groupid=0, jobs=1): err= 0: pid=4028545: Thu Jul 25 11:58:15 2024 00:11:38.502 read: IOPS=2085, BW=8344KiB/s (8544kB/s)(8836KiB/1059msec) 00:11:38.502 slat (nsec): min=1006, max=37063k, avg=247629.43, stdev=2013832.28 00:11:38.502 clat (usec): min=6024, max=90021, avg=32569.77, stdev=16313.04 00:11:38.502 lat (msec): min=6, max=117, avg=32.82, stdev=16.46 00:11:38.502 clat percentiles (usec): 00:11:38.502 | 1.00th=[ 8160], 5.00th=[10945], 10.00th=[16909], 20.00th=[19530], 00:11:38.502 | 30.00th=[22938], 40.00th=[23462], 50.00th=[25560], 60.00th=[34866], 00:11:38.502 | 70.00th=[40633], 80.00th=[45876], 90.00th=[50594], 95.00th=[58983], 00:11:38.502 | 99.00th=[89654], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:11:38.502 | 99.99th=[89654] 00:11:38.502 write: IOPS=2417, BW=9669KiB/s (9902kB/s)(10.0MiB/1059msec); 0 zone resets 00:11:38.502 slat (usec): min=3, max=36185, avg=169.96, stdev=1578.80 00:11:38.502 clat (usec): min=1616, max=74382, avg=24269.50, stdev=13993.84 00:11:38.502 lat (usec): 
min=1629, max=74413, avg=24439.46, stdev=14100.48 00:11:38.502 clat percentiles (usec): 00:11:38.502 | 1.00th=[ 3687], 5.00th=[ 7832], 10.00th=[11207], 20.00th=[15270], 00:11:38.502 | 30.00th=[16057], 40.00th=[16319], 50.00th=[18744], 60.00th=[20055], 00:11:38.502 | 70.00th=[27132], 80.00th=[37487], 90.00th=[47449], 95.00th=[53740], 00:11:38.502 | 99.00th=[64750], 99.50th=[65274], 99.90th=[69731], 99.95th=[73925], 00:11:38.502 | 99.99th=[73925] 00:11:38.502 bw ( KiB/s): min= 8376, max=12104, per=27.18%, avg=10240.00, stdev=2636.09, samples=2 00:11:38.502 iops : min= 2094, max= 3026, avg=2560.00, stdev=659.02, samples=2 00:11:38.502 lat (msec) : 2=0.06%, 4=0.50%, 10=6.27%, 20=34.89%, 50=48.67% 00:11:38.502 lat (msec) : 100=9.60% 00:11:38.502 cpu : usr=1.80%, sys=3.02%, ctx=191, majf=0, minf=1 00:11:38.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:38.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.502 issued rwts: total=2209,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.502 job1: (groupid=0, jobs=1): err= 0: pid=4028546: Thu Jul 25 11:58:15 2024 00:11:38.502 read: IOPS=2217, BW=8872KiB/s (9085kB/s)(8916KiB/1005msec) 00:11:38.502 slat (nsec): min=1969, max=33773k, avg=217209.63, stdev=1666489.92 00:11:38.502 clat (usec): min=2389, max=61008, avg=26303.80, stdev=9453.50 00:11:38.502 lat (usec): min=6020, max=61034, avg=26521.01, stdev=9568.39 00:11:38.502 clat percentiles (usec): 00:11:38.502 | 1.00th=[ 6259], 5.00th=[11338], 10.00th=[13566], 20.00th=[17171], 00:11:38.502 | 30.00th=[24249], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:11:38.502 | 70.00th=[28443], 80.00th=[33162], 90.00th=[40109], 95.00th=[43254], 00:11:38.502 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[56361], 00:11:38.502 | 99.99th=[61080] 
00:11:38.502 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:11:38.502 slat (nsec): min=1974, max=36017k, avg=165349.30, stdev=1488354.84 00:11:38.503 clat (usec): min=1128, max=71140, avg=26831.89, stdev=11611.16 00:11:38.503 lat (usec): min=1138, max=71151, avg=26997.24, stdev=11667.56 00:11:38.503 clat percentiles (usec): 00:11:38.503 | 1.00th=[ 8979], 5.00th=[13435], 10.00th=[14222], 20.00th=[17695], 00:11:38.503 | 30.00th=[21103], 40.00th=[23200], 50.00th=[24511], 60.00th=[27657], 00:11:38.503 | 70.00th=[30016], 80.00th=[33424], 90.00th=[39584], 95.00th=[50070], 00:11:38.503 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:11:38.503 | 99.99th=[70779] 00:11:38.503 bw ( KiB/s): min= 9880, max=10600, per=27.18%, avg=10240.00, stdev=509.12, samples=2 00:11:38.503 iops : min= 2470, max= 2650, avg=2560.00, stdev=127.28, samples=2 00:11:38.503 lat (msec) : 2=0.13%, 4=0.02%, 10=2.23%, 20=22.45%, 50=71.46% 00:11:38.503 lat (msec) : 100=3.72% 00:11:38.503 cpu : usr=1.79%, sys=3.39%, ctx=235, majf=0, minf=1 00:11:38.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:38.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.503 issued rwts: total=2229,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.503 job2: (groupid=0, jobs=1): err= 0: pid=4028547: Thu Jul 25 11:58:15 2024 00:11:38.503 read: IOPS=1963, BW=7853KiB/s (8042kB/s)(7908KiB/1007msec) 00:11:38.503 slat (usec): min=2, max=13644, avg=259.43, stdev=1416.04 00:11:38.503 clat (usec): min=2719, max=44400, avg=30671.49, stdev=4728.73 00:11:38.503 lat (usec): min=7576, max=44428, avg=30930.92, stdev=4835.02 00:11:38.503 clat percentiles (usec): 00:11:38.503 | 1.00th=[17957], 5.00th=[22152], 10.00th=[25297], 20.00th=[28443], 00:11:38.503 | 
30.00th=[29492], 40.00th=[30016], 50.00th=[31065], 60.00th=[31851], 00:11:38.503 | 70.00th=[32375], 80.00th=[32900], 90.00th=[36439], 95.00th=[38536], 00:11:38.503 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[44303], 00:11:38.503 | 99.99th=[44303] 00:11:38.503 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:11:38.503 slat (usec): min=3, max=16022, avg=231.75, stdev=1049.51 00:11:38.503 clat (usec): min=16272, max=47389, avg=32142.41, stdev=4864.00 00:11:38.503 lat (usec): min=16283, max=47420, avg=32374.16, stdev=4952.11 00:11:38.503 clat percentiles (usec): 00:11:38.503 | 1.00th=[19268], 5.00th=[24773], 10.00th=[28443], 20.00th=[29492], 00:11:38.503 | 30.00th=[30278], 40.00th=[30540], 50.00th=[31065], 60.00th=[32113], 00:11:38.503 | 70.00th=[32900], 80.00th=[34341], 90.00th=[38536], 95.00th=[44303], 00:11:38.503 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:11:38.503 | 99.99th=[47449] 00:11:38.503 bw ( KiB/s): min= 8192, max= 8192, per=21.74%, avg=8192.00, stdev= 0.00, samples=2 00:11:38.503 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:38.503 lat (msec) : 4=0.02%, 10=0.20%, 20=2.16%, 50=97.61% 00:11:38.503 cpu : usr=2.09%, sys=3.08%, ctx=269, majf=0, minf=1 00:11:38.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:38.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.503 issued rwts: total=1977,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.503 job3: (groupid=0, jobs=1): err= 0: pid=4028548: Thu Jul 25 11:58:15 2024 00:11:38.503 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:11:38.503 slat (nsec): min=1877, max=39168k, avg=156379.73, stdev=1546955.55 00:11:38.503 clat (usec): min=2142, max=78995, avg=26048.70, stdev=12613.41 
00:11:38.503 lat (usec): min=2163, max=79043, avg=26205.08, stdev=12731.84 00:11:38.503 clat percentiles (usec): 00:11:38.503 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 7111], 20.00th=[14877], 00:11:38.503 | 30.00th=[19530], 40.00th=[22414], 50.00th=[26346], 60.00th=[27919], 00:11:38.503 | 70.00th=[33817], 80.00th=[40109], 90.00th=[44303], 95.00th=[46400], 00:11:38.503 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54264], 99.95th=[71828], 00:11:38.503 | 99.99th=[79168] 00:11:38.503 write: IOPS=2762, BW=10.8MiB/s (11.3MB/s)(11.0MiB/1016msec); 0 zone resets 00:11:38.503 slat (nsec): min=1934, max=32345k, avg=148721.17, stdev=1380034.13 00:11:38.503 clat (usec): min=1036, max=66484, avg=22173.78, stdev=10322.29 00:11:38.503 lat (usec): min=1193, max=66496, avg=22322.50, stdev=10412.87 00:11:38.503 clat percentiles (usec): 00:11:38.503 | 1.00th=[ 6587], 5.00th=[ 8455], 10.00th=[10683], 20.00th=[14091], 00:11:38.503 | 30.00th=[17433], 40.00th=[18482], 50.00th=[20841], 60.00th=[21627], 00:11:38.503 | 70.00th=[25035], 80.00th=[28181], 90.00th=[34341], 95.00th=[43779], 00:11:38.503 | 99.00th=[57410], 99.50th=[60556], 99.90th=[62129], 99.95th=[66323], 00:11:38.503 | 99.99th=[66323] 00:11:38.503 bw ( KiB/s): min= 9560, max=11872, per=28.44%, avg=10716.00, stdev=1634.83, samples=2 00:11:38.503 iops : min= 2390, max= 2968, avg=2679.00, stdev=408.71, samples=2 00:11:38.503 lat (msec) : 2=0.02%, 4=1.71%, 10=9.54%, 20=28.64%, 50=57.56% 00:11:38.503 lat (msec) : 100=2.53% 00:11:38.503 cpu : usr=1.28%, sys=4.24%, ctx=212, majf=0, minf=1 00:11:38.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:38.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.503 issued rwts: total=2560,2807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.503 00:11:38.503 Run status group 0 
(all jobs): 00:11:38.503 READ: bw=33.1MiB/s (34.7MB/s), 7853KiB/s-9.84MiB/s (8042kB/s-10.3MB/s), io=35.1MiB (36.8MB), run=1005-1059msec 00:11:38.503 WRITE: bw=36.8MiB/s (38.6MB/s), 8135KiB/s-10.8MiB/s (8330kB/s-11.3MB/s), io=39.0MiB (40.9MB), run=1005-1059msec 00:11:38.503 00:11:38.503 Disk stats (read/write): 00:11:38.503 nvme0n1: ios=1969/2048, merge=0/0, ticks=48196/36632, in_queue=84828, util=99.30% 00:11:38.503 nvme0n2: ios=1581/2001, merge=0/0, ticks=45304/54201, in_queue=99505, util=92.96% 00:11:38.503 nvme0n3: ios=1554/1839, merge=0/0, ticks=24886/26954, in_queue=51840, util=99.79% 00:11:38.503 nvme0n4: ios=2077/2174, merge=0/0, ticks=43300/39997, in_queue=83297, util=96.03% 00:11:38.503 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:38.503 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4028808 00:11:38.503 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:38.503 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:38.503 [global] 00:11:38.503 thread=1 00:11:38.503 invalidate=1 00:11:38.503 rw=read 00:11:38.503 time_based=1 00:11:38.503 runtime=10 00:11:38.503 ioengine=libaio 00:11:38.503 direct=1 00:11:38.503 bs=4096 00:11:38.503 iodepth=1 00:11:38.503 norandommap=1 00:11:38.503 numjobs=1 00:11:38.503 00:11:38.503 [job0] 00:11:38.503 filename=/dev/nvme0n1 00:11:38.503 [job1] 00:11:38.503 filename=/dev/nvme0n2 00:11:38.503 [job2] 00:11:38.503 filename=/dev/nvme0n3 00:11:38.503 [job3] 00:11:38.503 filename=/dev/nvme0n4 00:11:38.503 Could not set queue depth (nvme0n1) 00:11:38.503 Could not set queue depth (nvme0n2) 00:11:38.503 Could not set queue depth (nvme0n3) 00:11:38.503 Could not set queue depth (nvme0n4) 00:11:38.771 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:38.771 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.771 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.771 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.771 fio-3.35 00:11:38.771 Starting 4 threads 00:11:41.300 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:41.558 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6287360, buflen=4096 00:11:41.558 fio: pid=4028976, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:41.558 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:41.817 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.817 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:41.817 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5775360, buflen=4096 00:11:41.817 fio: pid=4028975, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:42.075 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=23281664, buflen=4096 00:11:42.075 fio: pid=4028969, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:42.075 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.075 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:42.333 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.333 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:42.333 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=29814784, buflen=4096 00:11:42.333 fio: pid=4028970, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:42.333 00:11:42.333 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4028969: Thu Jul 25 11:58:19 2024 00:11:42.333 read: IOPS=1783, BW=7134KiB/s (7305kB/s)(22.2MiB/3187msec) 00:11:42.333 slat (usec): min=6, max=15775, avg=13.89, stdev=255.81 00:11:42.333 clat (usec): min=333, max=1094, avg=541.38, stdev=97.16 00:11:42.333 lat (usec): min=340, max=16651, avg=555.27, stdev=279.28 00:11:42.333 clat percentiles (usec): 00:11:42.333 | 1.00th=[ 355], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 474], 00:11:42.333 | 30.00th=[ 494], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 594], 00:11:42.333 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 652], 95.00th=[ 668], 00:11:42.333 | 99.00th=[ 725], 99.50th=[ 783], 99.90th=[ 947], 99.95th=[ 1012], 00:11:42.333 | 99.99th=[ 1090] 00:11:42.333 bw ( KiB/s): min= 6336, max= 9048, per=39.27%, avg=7167.33, stdev=971.62, samples=6 00:11:42.333 iops : min= 1584, max= 2262, avg=1791.83, stdev=242.90, samples=6 00:11:42.333 lat (usec) : 500=34.64%, 750=64.63%, 1000=0.67% 00:11:42.333 lat (msec) : 2=0.05% 00:11:42.333 cpu : usr=0.66%, sys=2.01%, ctx=5687, majf=0, minf=1 00:11:42.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.333 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.333 issued rwts: total=5685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.334 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4028970: Thu Jul 25 11:58:19 2024 00:11:42.334 read: IOPS=2087, BW=8350KiB/s (8550kB/s)(28.4MiB/3487msec) 00:11:42.334 slat (usec): min=4, max=33949, avg=11.90, stdev=397.87 00:11:42.334 clat (usec): min=271, max=47769, avg=463.05, stdev=1678.69 00:11:42.334 lat (usec): min=279, max=75125, avg=474.95, stdev=1834.29 00:11:42.334 clat percentiles (usec): 00:11:42.334 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 00:11:42.334 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 392], 00:11:42.334 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 482], 95.00th=[ 506], 00:11:42.334 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:11:42.334 | 99.99th=[47973] 00:11:42.334 bw ( KiB/s): min= 7702, max=10544, per=52.00%, avg=9489.00, stdev=1124.71, samples=6 00:11:42.334 iops : min= 1925, max= 2636, avg=2372.17, stdev=281.34, samples=6 00:11:42.334 lat (usec) : 500=93.71%, 750=6.10%, 1000=0.01% 00:11:42.334 lat (msec) : 50=0.16% 00:11:42.334 cpu : usr=0.63%, sys=1.81%, ctx=7286, majf=0, minf=1 00:11:42.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.334 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.334 issued rwts: total=7280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.334 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4028975: Thu Jul 25 11:58:19 2024 00:11:42.334 read: IOPS=476, BW=1903KiB/s (1949kB/s)(5640KiB/2963msec) 00:11:42.334 slat (nsec): 
min=8316, max=43887, avg=10068.56, stdev=2972.74 00:11:42.334 clat (usec): min=365, max=41606, avg=2074.28, stdev=7774.86 00:11:42.334 lat (usec): min=375, max=41616, avg=2084.34, stdev=7776.87 00:11:42.334 clat percentiles (usec): 00:11:42.334 | 1.00th=[ 408], 5.00th=[ 433], 10.00th=[ 453], 20.00th=[ 469], 00:11:42.334 | 30.00th=[ 482], 40.00th=[ 494], 50.00th=[ 502], 60.00th=[ 515], 00:11:42.334 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 693], 00:11:42.334 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:42.334 | 99.99th=[41681] 00:11:42.334 bw ( KiB/s): min= 104, max= 6928, per=12.26%, avg=2238.40, stdev=2854.81, samples=5 00:11:42.334 iops : min= 26, max= 1732, avg=559.60, stdev=713.70, samples=5 00:11:42.334 lat (usec) : 500=49.40%, 750=46.28%, 1000=0.35% 00:11:42.334 lat (msec) : 2=0.07%, 50=3.83% 00:11:42.334 cpu : usr=0.24%, sys=0.94%, ctx=1411, majf=0, minf=1 00:11:42.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.334 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.334 issued rwts: total=1411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.334 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4028976: Thu Jul 25 11:58:19 2024 00:11:42.334 read: IOPS=575, BW=2299KiB/s (2354kB/s)(6140KiB/2671msec) 00:11:42.334 slat (nsec): min=7429, max=32230, avg=8631.73, stdev=2241.53 00:11:42.334 clat (usec): min=505, max=42446, avg=1715.72, stdev=6566.12 00:11:42.334 lat (usec): min=515, max=42454, avg=1724.35, stdev=6567.18 00:11:42.334 clat percentiles (usec): 00:11:42.334 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:11:42.334 | 30.00th=[ 611], 40.00th=[ 611], 50.00th=[ 619], 60.00th=[ 627], 00:11:42.334 | 70.00th=[ 644], 80.00th=[ 
660], 90.00th=[ 685], 95.00th=[ 717], 00:11:42.334 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:42.334 | 99.99th=[42206] 00:11:42.334 bw ( KiB/s): min= 96, max= 6344, per=11.42%, avg=2084.80, stdev=2861.15, samples=5 00:11:42.334 iops : min= 24, max= 1586, avg=521.20, stdev=715.29, samples=5 00:11:42.334 lat (usec) : 750=96.29%, 1000=0.98% 00:11:42.334 lat (msec) : 50=2.67% 00:11:42.334 cpu : usr=0.49%, sys=0.30%, ctx=1536, majf=0, minf=2 00:11:42.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.334 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.334 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.334 00:11:42.334 Run status group 0 (all jobs): 00:11:42.334 READ: bw=17.8MiB/s (18.7MB/s), 1903KiB/s-8350KiB/s (1949kB/s-8550kB/s), io=62.1MiB (65.2MB), run=2671-3487msec 00:11:42.334 00:11:42.334 Disk stats (read/write): 00:11:42.334 nvme0n1: ios=5515/0, merge=0/0, ticks=2955/0, in_queue=2955, util=94.21% 00:11:42.334 nvme0n2: ios=7300/0, merge=0/0, ticks=3724/0, in_queue=3724, util=98.25% 00:11:42.334 nvme0n3: ios=1407/0, merge=0/0, ticks=2784/0, in_queue=2784, util=96.36% 00:11:42.334 nvme0n4: ios=1428/0, merge=0/0, ticks=2557/0, in_queue=2557, util=96.38% 00:11:42.593 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.593 11:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:42.851 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.851 11:58:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:43.109 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.109 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:43.367 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.367 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:43.626 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:43.626 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4028808 00:11:43.626 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:43.626 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.626 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.884 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:43.885 nvmf hotplug test: fio failed as expected 00:11:43.885 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.885 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.885 rmmod nvme_tcp 
00:11:43.885 rmmod nvme_fabrics 00:11:44.143 rmmod nvme_keyring 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4025495 ']' 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4025495 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 4025495 ']' 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 4025495 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4025495 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4025495' 00:11:44.143 killing process with pid 4025495 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 4025495 00:11:44.143 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 4025495 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:44.402 
11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.402 11:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.322 00:11:46.322 real 0m28.761s 00:11:46.322 user 2m25.422s 00:11:46.322 sys 0m8.633s 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 ************************************ 00:11:46.322 END TEST nvmf_fio_target 00:11:46.322 ************************************ 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 ************************************ 00:11:46.322 START TEST nvmf_bdevio 
00:11:46.322 ************************************ 00:11:46.322 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:46.584 * Looking for test storage... 00:11:46.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:46.584 11:58:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.584 11:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.156 11:58:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.156 11:58:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.156 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:53.157 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:53.157 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:53.157 Found net devices under 0000:af:00.0: cvl_0_0 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:53.157 Found net devices under 0000:af:00.1: cvl_0_1 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:11:53.157 00:11:53.157 --- 10.0.0.2 ping statistics --- 00:11:53.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.157 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:11:53.157 00:11:53.157 --- 10.0.0.1 ping statistics --- 00:11:53.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.157 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4033645 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4033645 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 4033645 ']' 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.157 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.157 [2024-07-25 11:58:29.722747] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:11:53.157 [2024-07-25 11:58:29.722809] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.157 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.157 [2024-07-25 11:58:29.843177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.157 [2024-07-25 11:58:29.991582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.157 [2024-07-25 11:58:29.991666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.157 [2024-07-25 11:58:29.991688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.157 [2024-07-25 11:58:29.991706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.157 [2024-07-25 11:58:29.991723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:53.157 [2024-07-25 11:58:29.991816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.157 [2024-07-25 11:58:29.991930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:53.157 [2024-07-25 11:58:29.992069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:53.157 [2024-07-25 11:58:29.992074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.415 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.673 [2024-07-25 11:58:30.723303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.674 11:58:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.674 Malloc0 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.674 [2024-07-25 11:58:30.779537] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:53.674 { 00:11:53.674 "params": { 00:11:53.674 "name": "Nvme$subsystem", 00:11:53.674 "trtype": "$TEST_TRANSPORT", 00:11:53.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:53.674 "adrfam": "ipv4", 00:11:53.674 "trsvcid": "$NVMF_PORT", 00:11:53.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:53.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:53.674 "hdgst": ${hdgst:-false}, 00:11:53.674 "ddgst": ${ddgst:-false} 00:11:53.674 }, 00:11:53.674 "method": "bdev_nvme_attach_controller" 00:11:53.674 } 00:11:53.674 EOF 00:11:53.674 )") 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:53.674 11:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:53.674 "params": { 00:11:53.674 "name": "Nvme1", 00:11:53.674 "trtype": "tcp", 00:11:53.674 "traddr": "10.0.0.2", 00:11:53.674 "adrfam": "ipv4", 00:11:53.674 "trsvcid": "4420", 00:11:53.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:53.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:53.674 "hdgst": false, 00:11:53.674 "ddgst": false 00:11:53.674 }, 00:11:53.674 "method": "bdev_nvme_attach_controller" 00:11:53.674 }' 00:11:53.674 [2024-07-25 11:58:30.833183] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:11:53.674 [2024-07-25 11:58:30.833245] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033793 ] 00:11:53.674 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.674 [2024-07-25 11:58:30.914887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.932 [2024-07-25 11:58:31.006325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.932 [2024-07-25 11:58:31.006440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.932 [2024-07-25 11:58:31.006440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.190 I/O targets: 00:11:54.190 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:54.190 00:11:54.190 00:11:54.190 CUnit - A unit testing framework for C - Version 2.1-3 00:11:54.190 http://cunit.sourceforge.net/ 00:11:54.190 00:11:54.190 00:11:54.190 Suite: bdevio tests on: Nvme1n1 00:11:54.190 Test: blockdev write read block ...passed 00:11:54.190 Test: blockdev write zeroes read block ...passed 00:11:54.190 Test: blockdev write zeroes read no split 
...passed 00:11:54.190 Test: blockdev write zeroes read split ...passed 00:11:54.506 Test: blockdev write zeroes read split partial ...passed 00:11:54.506 Test: blockdev reset ...[2024-07-25 11:58:31.501978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:54.506 [2024-07-25 11:58:31.502067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2288c80 (9): Bad file descriptor 00:11:54.506 [2024-07-25 11:58:31.532482] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:54.506 passed 00:11:54.506 Test: blockdev write read 8 blocks ...passed 00:11:54.506 Test: blockdev write read size > 128k ...passed 00:11:54.506 Test: blockdev write read invalid size ...passed 00:11:54.506 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:54.506 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:54.506 Test: blockdev write read max offset ...passed 00:11:54.506 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:54.506 Test: blockdev writev readv 8 blocks ...passed 00:11:54.506 Test: blockdev writev readv 30 x 1block ...passed 00:11:54.506 Test: blockdev writev readv block ...passed 00:11:54.506 Test: blockdev writev readv size > 128k ...passed 00:11:54.506 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:54.506 Test: blockdev comparev and writev ...[2024-07-25 11:58:31.753223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.753287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.753329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:54.506 [2024-07-25 11:58:31.753352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.754180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.754213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.754249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.754271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.755076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.755107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.755143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.755165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.755966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.755996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:54.506 [2024-07-25 11:58:31.756033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.506 [2024-07-25 11:58:31.756054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:54.506 passed 00:11:54.765 Test: blockdev nvme passthru rw ...passed 00:11:54.765 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:58:31.840202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:54.765 [2024-07-25 11:58:31.840242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:54.765 [2024-07-25 11:58:31.840629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:54.765 [2024-07-25 11:58:31.840659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:54.765 [2024-07-25 11:58:31.840975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:54.765 [2024-07-25 11:58:31.841004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:54.765 [2024-07-25 11:58:31.841311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:54.765 [2024-07-25 11:58:31.841341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:54.765 passed 00:11:54.765 Test: blockdev nvme admin passthru ...passed 00:11:54.765 Test: blockdev copy ...passed 00:11:54.765 00:11:54.765 Run Summary: Type Total Ran Passed Failed Inactive 00:11:54.765 suites 1 1 n/a 0 0 00:11:54.765 tests 23 23 23 0 0 00:11:54.765 asserts 152 152 152 0 n/a 00:11:54.765 00:11:54.765 Elapsed time = 
1.198 seconds 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.024 rmmod nvme_tcp 00:11:55.024 rmmod nvme_fabrics 00:11:55.024 rmmod nvme_keyring 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4033645 ']' 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4033645 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 4033645 ']' 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 4033645 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4033645 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4033645' 00:11:55.024 killing process with pid 4033645 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 4033645 00:11:55.024 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 4033645 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.284 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.284 
11:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.837 00:11:57.837 real 0m10.983s 00:11:57.837 user 0m14.081s 00:11:57.837 sys 0m5.158s 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:57.837 ************************************ 00:11:57.837 END TEST nvmf_bdevio 00:11:57.837 ************************************ 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:57.837 00:11:57.837 real 4m59.853s 00:11:57.837 user 12m17.978s 00:11:57.837 sys 1m38.235s 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:57.837 ************************************ 00:11:57.837 END TEST nvmf_target_core 00:11:57.837 ************************************ 00:11:57.837 11:58:34 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:57.837 11:58:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.837 11:58:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.837 11:58:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:57.837 ************************************ 00:11:57.837 START TEST nvmf_target_extra 00:11:57.837 ************************************ 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:57.837 * Looking for test storage... 
00:11:57.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.837 11:58:34 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.837 ************************************ 00:11:57.837 START TEST nvmf_example 00:11:57.837 ************************************ 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:57.837 * Looking for test storage... 
00:11:57.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.837 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:57.838 11:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.838 11:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.838 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:04.405 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:04.405 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.405 11:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:04.405 Found net devices under 0000:af:00.0: cvl_0_0 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.405 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:04.406 Found net devices under 0000:af:00.1: cvl_0_1 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
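An aside on the `NVMF_TARGET_NS_CMD` array the trace just defined: it is a bash command-prefix pattern, where an array holding `ip netns exec <ns>` is expanded in front of another command so that command runs inside the target namespace. The sketch below is illustrative (not from the log) and uses a dry-run `echo` wrapper, since the real invocation needs root and an existing namespace; names mirror the log.

```shell
# Command-prefix pattern behind NVMF_TARGET_NS_CMD: expand the array in
# front of any command to run it inside the target network namespace.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Dry run: print the composed command instead of executing it
# (the real thing needs root and the namespace to exist).
run_in_target_ns() { echo "${NVMF_TARGET_NS_CMD[@]}" "$@"; }

run_in_target_ns ping -c 1 10.0.0.1
# → ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The same expansion is what later turns `NVMF_APP` into `ip netns exec cvl_0_0_ns_spdk .../nvmf ...` in the trace.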
00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:04.406 00:12:04.406 --- 10.0.0.2 ping statistics --- 00:12:04.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.406 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:12:04.406 00:12:04.406 --- 10.0.0.1 ping statistics --- 00:12:04.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.406 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4037824 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4037824 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 4037824 ']' 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
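The `nvmf_tcp_init` steps traced above can be summarized as: move one port of the NIC pair into a private namespace as the target side, keep the other in the root namespace as the initiator, address both on 10.0.0.0/24, and open TCP port 4420. This is a hedged sketch of that topology, with interface and namespace names taken from the log; `echo` makes it a dry run because the real commands need root and the actual devices.

```shell
# Dry-run sketch of the nvmf_tcp_init topology from the trace:
# $1 = target-side interface, $2 = initiator-side interface, $3 = namespace.
setup_nvmf_tcp_topology() {
  local tgt_if=$1 ini_if=$2 ns=$3
  echo ip netns add "$ns"                                      # target namespace
  echo ip link set "$tgt_if" netns "$ns"                       # move target port
  echo ip addr add 10.0.0.1/24 dev "$ini_if"                   # initiator IP
  echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
  echo ip link set "$ini_if" up
  echo ip netns exec "$ns" ip link set "$tgt_if" up
  echo iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_tcp_topology cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The closing pings in the trace (root namespace to 10.0.0.2, namespace to 10.0.0.1) verify this topology before the nvmf target is started.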
00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.406 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.406 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.664 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.922 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.922 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.922 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.922 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.922 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.922 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:04.922 11:58:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:04.922 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.119 Initializing NVMe Controllers 00:12:17.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:17.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:17.119 Initialization complete. Launching workers. 00:12:17.119 ======================================================== 00:12:17.119 Latency(us) 00:12:17.119 Device Information : IOPS MiB/s Average min max 00:12:17.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10923.20 42.67 5861.29 1034.23 20218.86 00:12:17.119 ======================================================== 00:12:17.119 Total : 10923.20 42.67 5861.29 1034.23 20218.86 00:12:17.119 00:12:17.119 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:17.119 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:17.119 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.119 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:17.119 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.120 rmmod nvme_tcp 00:12:17.120 rmmod nvme_fabrics 00:12:17.120 rmmod nvme_keyring 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4037824 ']' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4037824 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 4037824 ']' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 4037824 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4037824 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4037824' 00:12:17.120 killing process with pid 4037824 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 4037824 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 4037824 00:12:17.120 nvmf threads initialize successfully 00:12:17.120 bdev subsystem init successfully 00:12:17.120 created a nvmf target service 00:12:17.120 create targets's poll groups done 00:12:17.120 all subsystems of target started 00:12:17.120 nvmf target is running 00:12:17.120 all subsystems of target stopped 00:12:17.120 destroy targets's poll groups done 00:12:17.120 destroyed the nvmf target 
service 00:12:17.120 bdev subsystem finish successfully 00:12:17.120 nvmf threads destroy successfully 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.120 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 00:12:17.690 real 0m19.895s 00:12:17.690 user 0m47.012s 00:12:17.690 sys 0m5.803s 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 ************************************ 00:12:17.690 END TEST nvmf_example 00:12:17.690 ************************************ 00:12:17.690 11:58:54 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.690 ************************************ 00:12:17.690 START TEST nvmf_filesystem 00:12:17.690 ************************************ 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:17.690 * Looking for test storage... 00:12:17.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:17.690 11:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:17.690 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:17.691 #define SPDK_CONFIG_H 00:12:17.691 #define SPDK_CONFIG_APPS 1 00:12:17.691 #define SPDK_CONFIG_ARCH native 00:12:17.691 #undef SPDK_CONFIG_ASAN 00:12:17.691 #undef SPDK_CONFIG_AVAHI 00:12:17.691 #undef SPDK_CONFIG_CET 00:12:17.691 #define SPDK_CONFIG_COVERAGE 1 00:12:17.691 #define SPDK_CONFIG_CROSS_PREFIX 00:12:17.691 #undef SPDK_CONFIG_CRYPTO 00:12:17.691 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:17.691 #undef SPDK_CONFIG_CUSTOMOCF 00:12:17.691 #undef SPDK_CONFIG_DAOS 00:12:17.691 #define SPDK_CONFIG_DAOS_DIR 00:12:17.691 #define SPDK_CONFIG_DEBUG 1 00:12:17.691 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:17.691 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:17.691 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:17.691 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:17.691 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:17.691 #undef SPDK_CONFIG_DPDK_UADK 00:12:17.691 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:17.691 #define SPDK_CONFIG_EXAMPLES 1 00:12:17.691 #undef SPDK_CONFIG_FC 00:12:17.691 #define SPDK_CONFIG_FC_PATH 00:12:17.691 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:17.691 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:17.691 
#undef SPDK_CONFIG_FUSE 00:12:17.691 #undef SPDK_CONFIG_FUZZER 00:12:17.691 #define SPDK_CONFIG_FUZZER_LIB 00:12:17.691 #undef SPDK_CONFIG_GOLANG 00:12:17.691 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:17.691 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:17.691 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:17.691 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:17.691 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:17.691 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:17.691 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:17.691 #define SPDK_CONFIG_IDXD 1 00:12:17.691 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:17.691 #undef SPDK_CONFIG_IPSEC_MB 00:12:17.691 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:17.691 #define SPDK_CONFIG_ISAL 1 00:12:17.691 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:17.691 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:17.691 #define SPDK_CONFIG_LIBDIR 00:12:17.691 #undef SPDK_CONFIG_LTO 00:12:17.691 #define SPDK_CONFIG_MAX_LCORES 128 00:12:17.691 #define SPDK_CONFIG_NVME_CUSE 1 00:12:17.691 #undef SPDK_CONFIG_OCF 00:12:17.691 #define SPDK_CONFIG_OCF_PATH 00:12:17.691 #define SPDK_CONFIG_OPENSSL_PATH 00:12:17.691 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:17.691 #define SPDK_CONFIG_PGO_DIR 00:12:17.691 #undef SPDK_CONFIG_PGO_USE 00:12:17.691 #define SPDK_CONFIG_PREFIX /usr/local 00:12:17.691 #undef SPDK_CONFIG_RAID5F 00:12:17.691 #undef SPDK_CONFIG_RBD 00:12:17.691 #define SPDK_CONFIG_RDMA 1 00:12:17.691 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:17.691 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:17.691 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:17.691 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:17.691 #define SPDK_CONFIG_SHARED 1 00:12:17.691 #undef SPDK_CONFIG_SMA 00:12:17.691 #define SPDK_CONFIG_TESTS 1 00:12:17.691 #undef SPDK_CONFIG_TSAN 00:12:17.691 #define SPDK_CONFIG_UBLK 1 00:12:17.691 #define SPDK_CONFIG_UBSAN 1 00:12:17.691 #undef SPDK_CONFIG_UNIT_TESTS 00:12:17.691 #undef SPDK_CONFIG_URING 00:12:17.691 #define SPDK_CONFIG_URING_PATH 00:12:17.691 #undef 
SPDK_CONFIG_URING_ZNS 00:12:17.691 #undef SPDK_CONFIG_USDT 00:12:17.691 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:17.691 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:17.691 #define SPDK_CONFIG_VFIO_USER 1 00:12:17.691 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:17.691 #define SPDK_CONFIG_VHOST 1 00:12:17.691 #define SPDK_CONFIG_VIRTIO 1 00:12:17.691 #undef SPDK_CONFIG_VTUNE 00:12:17.691 #define SPDK_CONFIG_VTUNE_DIR 00:12:17.691 #define SPDK_CONFIG_WERROR 1 00:12:17.691 #define SPDK_CONFIG_WPDK_DIR 00:12:17.691 #undef SPDK_CONFIG_XNVME 00:12:17.691 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.691 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.691 11:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:17.692 11:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:17.692 
11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:17.692 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:17.955 11:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:17.955 
11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:17.955 11:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:17.955 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:17.956 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 4040447 ]] 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 4040447 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:17.956 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.sWicjc 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.sWicjc/tests/target /tmp/spdk.sWicjc 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=954339328 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:17.957 11:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330090496 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=83689820160 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=94501478400 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10811658240 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=47188557824 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=47250739200 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:17.957 11:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=18877210624 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=18900295680 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23085056 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=47249264640 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=47250739200 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1474560 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=9450143744 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=9450147840 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:17.957 * Looking for test storage... 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=83689820160 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=13026250752 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:17.957 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.958 11:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.958 11:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.958 11:58:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.529 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.529 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.529 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.530 11:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:24.530 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:24.530 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:24.530 Found net devices under 0000:af:00.0: cvl_0_0 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:24.530 Found net devices under 0000:af:00.1: cvl_0_1 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.530 11:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.530 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.530 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.530 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.530 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:24.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:12:24.530 00:12:24.530 --- 10.0.0.2 ping statistics --- 00:12:24.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.530 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:24.530 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:12:24.530 00:12:24.530 --- 10.0.0.1 ping statistics --- 00:12:24.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.530 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:12:24.530 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:24.531 11:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 ************************************ 00:12:24.531 START TEST nvmf_filesystem_no_in_capsule 00:12:24.531 ************************************ 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4043693 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4043693 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 4043693 ']' 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 [2024-07-25 11:59:01.197463] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:12:24.531 [2024-07-25 11:59:01.197528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.531 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.531 [2024-07-25 11:59:01.290371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.531 [2024-07-25 11:59:01.387187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.531 [2024-07-25 11:59:01.387227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:24.531 [2024-07-25 11:59:01.387238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.531 [2024-07-25 11:59:01.387247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.531 [2024-07-25 11:59:01.387255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.531 [2024-07-25 11:59:01.387310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.531 [2024-07-25 11:59:01.387345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.531 [2024-07-25 11:59:01.387457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.531 [2024-07-25 11:59:01.387458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 [2024-07-25 11:59:01.550421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 [2024-07-25 11:59:01.704035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:24.531 11:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.531 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:24.531 { 00:12:24.531 "name": "Malloc1", 00:12:24.531 "aliases": [ 00:12:24.531 "c050fce7-58e4-45b4-ae58-0c6c40aac38b" 00:12:24.531 ], 00:12:24.531 "product_name": "Malloc disk", 00:12:24.531 "block_size": 512, 00:12:24.531 "num_blocks": 1048576, 00:12:24.531 "uuid": "c050fce7-58e4-45b4-ae58-0c6c40aac38b", 00:12:24.531 "assigned_rate_limits": { 00:12:24.531 "rw_ios_per_sec": 0, 00:12:24.531 "rw_mbytes_per_sec": 0, 00:12:24.531 "r_mbytes_per_sec": 0, 00:12:24.531 "w_mbytes_per_sec": 0 00:12:24.531 }, 00:12:24.531 "claimed": true, 00:12:24.531 "claim_type": "exclusive_write", 00:12:24.531 "zoned": false, 00:12:24.531 "supported_io_types": { 00:12:24.531 "read": true, 00:12:24.531 "write": true, 00:12:24.531 "unmap": true, 00:12:24.531 "flush": true, 00:12:24.531 "reset": true, 00:12:24.531 "nvme_admin": false, 00:12:24.531 "nvme_io": false, 00:12:24.531 "nvme_io_md": false, 00:12:24.531 "write_zeroes": true, 00:12:24.531 "zcopy": true, 00:12:24.531 "get_zone_info": false, 00:12:24.532 "zone_management": false, 00:12:24.532 "zone_append": false, 00:12:24.532 "compare": false, 00:12:24.532 "compare_and_write": 
false, 00:12:24.532 "abort": true, 00:12:24.532 "seek_hole": false, 00:12:24.532 "seek_data": false, 00:12:24.532 "copy": true, 00:12:24.532 "nvme_iov_md": false 00:12:24.532 }, 00:12:24.532 "memory_domains": [ 00:12:24.532 { 00:12:24.532 "dma_device_id": "system", 00:12:24.532 "dma_device_type": 1 00:12:24.532 }, 00:12:24.532 { 00:12:24.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.532 "dma_device_type": 2 00:12:24.532 } 00:12:24.532 ], 00:12:24.532 "driver_specific": {} 00:12:24.532 } 00:12:24.532 ]' 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:24.532 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.909 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:25.909 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:25.909 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.909 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:25.909 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:28.449 11:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:28.449 11:59:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:29.423 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:30.359 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.359 ************************************ 00:12:30.359 START TEST filesystem_ext4 00:12:30.359 ************************************ 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:30.359 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:30.359 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:30.359 mke2fs 1.46.5 (30-Dec-2021) 00:12:30.359 Discarding device blocks: 0/522240 done 00:12:30.359 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:30.359 Filesystem UUID: 033e2d69-f96a-4d9b-a549-0688793def51 00:12:30.359 Superblock backups stored on blocks: 00:12:30.359 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:30.359 00:12:30.359 Allocating group tables: 0/64 done 00:12:30.359 Writing inode tables: 0/64 done 00:12:30.618 Creating journal (8192 blocks): done 00:12:31.445 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:31.445 00:12:31.445 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:31.445 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.012 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.272 11:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4043693 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.272 00:12:32.272 real 0m2.006s 00:12:32.272 user 0m0.036s 00:12:32.272 sys 0m0.057s 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 ************************************ 00:12:32.272 END TEST filesystem_ext4 00:12:32.272 ************************************ 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:32.272 
11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.272 ************************************ 00:12:32.272 START TEST filesystem_btrfs 00:12:32.272 ************************************ 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:32.272 11:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:32.272 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:32.532 btrfs-progs v6.6.2 00:12:32.532 See https://btrfs.readthedocs.io for more information. 00:12:32.532 00:12:32.532 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:32.532 NOTE: several default settings have changed in version 5.15, please make sure 00:12:32.532 this does not affect your deployments: 00:12:32.532 - DUP for metadata (-m dup) 00:12:32.532 - enabled no-holes (-O no-holes) 00:12:32.532 - enabled free-space-tree (-R free-space-tree) 00:12:32.532 00:12:32.532 Label: (null) 00:12:32.532 UUID: a24da537-aa73-4340-891c-27db88e9c8b0 00:12:32.532 Node size: 16384 00:12:32.532 Sector size: 4096 00:12:32.532 Filesystem size: 510.00MiB 00:12:32.532 Block group profiles: 00:12:32.532 Data: single 8.00MiB 00:12:32.532 Metadata: DUP 32.00MiB 00:12:32.532 System: DUP 8.00MiB 00:12:32.532 SSD detected: yes 00:12:32.532 Zoned device: no 00:12:32.532 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:32.532 Runtime features: free-space-tree 00:12:32.532 Checksum: crc32c 00:12:32.532 Number of devices: 1 00:12:32.532 Devices: 00:12:32.532 ID SIZE PATH 00:12:32.532 1 510.00MiB /dev/nvme0n1p1 00:12:32.532 00:12:32.532 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:32.532 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4043693 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.468 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.468 00:12:33.468 real 0m1.281s 00:12:33.468 user 0m0.026s 00:12:33.468 sys 0m0.127s 00:12:33.469 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:33.469 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:33.469 ************************************ 00:12:33.469 END TEST filesystem_btrfs 00:12:33.469 ************************************ 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.728 ************************************ 00:12:33.728 START TEST filesystem_xfs 00:12:33.728 ************************************ 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:33.728 11:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:33.728 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:33.728 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:33.728 = sectsz=512 attr=2, projid32bit=1 00:12:33.728 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:33.728 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:33.728 data = bsize=4096 blocks=130560, imaxpct=25 00:12:33.728 = sunit=0 swidth=0 blks 00:12:33.728 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:33.728 log =internal log bsize=4096 blocks=16384, version=2 00:12:33.728 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:33.728 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:34.664 Discarding blocks...Done. 
00:12:34.664 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:34.664 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.200 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4043693 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.200 11:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.200 00:12:37.200 real 0m3.249s 00:12:37.200 user 0m0.024s 00:12:37.200 sys 0m0.075s 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:37.200 ************************************ 00:12:37.200 END TEST filesystem_xfs 00:12:37.200 ************************************ 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4043693 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 4043693 ']' 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 4043693 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4043693 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4043693' 00:12:37.200 killing process with pid 4043693 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 4043693 00:12:37.200 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 4043693 00:12:37.460 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.460 00:12:37.460 real 0m13.574s 00:12:37.460 user 0m53.033s 00:12:37.460 sys 0m1.313s 00:12:37.460 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.460 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.460 ************************************ 00:12:37.460 END TEST nvmf_filesystem_no_in_capsule 00:12:37.460 ************************************ 00:12:37.460 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:37.460 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:37.460 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.460 11:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.719 ************************************ 00:12:37.719 START TEST nvmf_filesystem_in_capsule 00:12:37.719 ************************************ 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4046282 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4046282 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 4046282 ']' 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.719 11:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.719 11:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.719 [2024-07-25 11:59:14.845512] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:12:37.719 [2024-07-25 11:59:14.845565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.719 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.719 [2024-07-25 11:59:14.931105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.978 [2024-07-25 11:59:15.023938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.978 [2024-07-25 11:59:15.023978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.978 [2024-07-25 11:59:15.023989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.978 [2024-07-25 11:59:15.023998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.978 [2024-07-25 11:59:15.024006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:37.978 [2024-07-25 11:59:15.024050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.978 [2024-07-25 11:59:15.024163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.978 [2024-07-25 11:59:15.024300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.978 [2024-07-25 11:59:15.024300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.544 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.544 [2024-07-25 11:59:15.834160] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.802 Malloc1 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.802 11:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.802 [2024-07-25 11:59:15.996046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.802 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.802 11:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:38.802 { 00:12:38.802 "name": "Malloc1", 00:12:38.802 "aliases": [ 00:12:38.802 "7c299d4d-e5e0-4f69-b96a-525c83f9bb50" 00:12:38.802 ], 00:12:38.802 "product_name": "Malloc disk", 00:12:38.802 "block_size": 512, 00:12:38.802 "num_blocks": 1048576, 00:12:38.802 "uuid": "7c299d4d-e5e0-4f69-b96a-525c83f9bb50", 00:12:38.802 "assigned_rate_limits": { 00:12:38.802 "rw_ios_per_sec": 0, 00:12:38.802 "rw_mbytes_per_sec": 0, 00:12:38.802 "r_mbytes_per_sec": 0, 00:12:38.802 "w_mbytes_per_sec": 0 00:12:38.802 }, 00:12:38.802 "claimed": true, 00:12:38.802 "claim_type": "exclusive_write", 00:12:38.802 "zoned": false, 00:12:38.802 "supported_io_types": { 00:12:38.802 "read": true, 00:12:38.802 "write": true, 00:12:38.802 "unmap": true, 00:12:38.802 "flush": true, 00:12:38.802 "reset": true, 00:12:38.802 "nvme_admin": false, 00:12:38.802 "nvme_io": false, 00:12:38.802 "nvme_io_md": false, 00:12:38.802 "write_zeroes": true, 00:12:38.802 "zcopy": true, 00:12:38.802 "get_zone_info": false, 00:12:38.802 "zone_management": false, 00:12:38.802 "zone_append": false, 00:12:38.802 "compare": false, 00:12:38.802 "compare_and_write": false, 00:12:38.802 "abort": true, 00:12:38.802 "seek_hole": false, 00:12:38.802 "seek_data": false, 00:12:38.802 "copy": true, 00:12:38.802 "nvme_iov_md": false 00:12:38.802 }, 00:12:38.802 "memory_domains": [ 00:12:38.802 { 00:12:38.802 "dma_device_id": "system", 00:12:38.802 "dma_device_type": 1 00:12:38.802 }, 00:12:38.802 { 00:12:38.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.802 "dma_device_type": 2 00:12:38.802 } 00:12:38.802 ], 00:12:38.802 
"driver_specific": {} 00:12:38.802 } 00:12:38.802 ]' 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:38.802 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:39.060 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:39.060 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:39.060 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:39.060 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:39.061 11:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.440 11:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.440 11:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.440 11:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.440 11:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:40.440 11:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:42.349 11:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:42.349 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:42.608 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:42.608 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.542 ************************************ 00:12:43.542 START TEST filesystem_in_capsule_ext4 00:12:43.542 ************************************ 00:12:43.542 11:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:43.542 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:43.542 mke2fs 1.46.5 (30-Dec-2021) 00:12:43.801 Discarding device blocks: 
0/522240 done 00:12:43.801 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:43.801 Filesystem UUID: 33883d45-586e-4179-90c4-a1894959aae4 00:12:43.801 Superblock backups stored on blocks: 00:12:43.801 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:43.801 00:12:43.801 Allocating group tables: 0/64 done 00:12:43.801 Writing inode tables: 0/64 done 00:12:43.801 Creating journal (8192 blocks): done 00:12:44.886 Writing superblocks and filesystem accounting information: 0/64 done 00:12:44.886 00:12:44.886 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:44.886 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 4046282 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:45.145 00:12:45.145 real 0m1.574s 00:12:45.145 user 0m0.028s 00:12:45.145 sys 0m0.064s 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:45.145 ************************************ 00:12:45.145 END TEST filesystem_in_capsule_ext4 00:12:45.145 ************************************ 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.145 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.405 ************************************ 00:12:45.405 START 
TEST filesystem_in_capsule_btrfs 00:12:45.405 ************************************ 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:45.405 btrfs-progs v6.6.2 00:12:45.405 See https://btrfs.readthedocs.io for more information. 00:12:45.405 00:12:45.405 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:45.405 NOTE: several default settings have changed in version 5.15, please make sure 00:12:45.405 this does not affect your deployments: 00:12:45.405 - DUP for metadata (-m dup) 00:12:45.405 - enabled no-holes (-O no-holes) 00:12:45.405 - enabled free-space-tree (-R free-space-tree) 00:12:45.405 00:12:45.405 Label: (null) 00:12:45.405 UUID: b980c88a-4866-489b-b335-4bdef277f47b 00:12:45.405 Node size: 16384 00:12:45.405 Sector size: 4096 00:12:45.405 Filesystem size: 510.00MiB 00:12:45.405 Block group profiles: 00:12:45.405 Data: single 8.00MiB 00:12:45.405 Metadata: DUP 32.00MiB 00:12:45.405 System: DUP 8.00MiB 00:12:45.405 SSD detected: yes 00:12:45.405 Zoned device: no 00:12:45.405 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:45.405 Runtime features: free-space-tree 00:12:45.405 Checksum: crc32c 00:12:45.405 Number of devices: 1 00:12:45.405 Devices: 00:12:45.405 ID SIZE PATH 00:12:45.405 1 510.00MiB /dev/nvme0n1p1 00:12:45.405 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:45.405 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:45.665 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:45.665 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:45.924 11:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:45.924 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:45.924 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:45.924 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:45.924 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4046282 00:12:45.924 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:45.925 00:12:45.925 real 0m0.565s 00:12:45.925 user 0m0.028s 00:12:45.925 sys 0m0.125s 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:45.925 ************************************ 00:12:45.925 END TEST 
filesystem_in_capsule_btrfs 00:12:45.925 ************************************ 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.925 ************************************ 00:12:45.925 START TEST filesystem_in_capsule_xfs 00:12:45.925 ************************************ 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:45.925 11:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:45.925 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:45.925 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:45.925 = sectsz=512 attr=2, projid32bit=1 00:12:45.925 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:45.925 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:45.925 data = bsize=4096 blocks=130560, imaxpct=25 00:12:45.925 = sunit=0 swidth=0 blks 00:12:45.925 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:45.925 log =internal log bsize=4096 blocks=16384, version=2 00:12:45.925 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:45.925 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:47.302 Discarding blocks...Done. 
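The ext4, btrfs, and xfs passes above all follow the same cycle from target/filesystem.sh: force-format the partition, mount it, create and delete a file with syncs in between, then unmount. A minimal sketch of that cycle is below; the device and mountpoint names are illustrative (the mkfs/mount steps need root and a scratch block device, so only the mounted-directory smoke test is directly runnable).

```shell
#!/usr/bin/env bash
# Sketch of the create/verify cycle seen in target/filesystem.sh.
# /dev/nvme0n1p1 and /mnt/device are illustrative names from the log.

# Smoke-test a mounted filesystem: create a file, sync, delete it, sync.
fs_smoke_test() {
    local mnt=$1
    touch "$mnt/aaa"
    sync
    rm "$mnt/aaa"
    sync
    # success if the directory is empty again
    [ -z "$(ls -A "$mnt")" ]
}

# Full cycle (requires root; shown for illustration only):
#   parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
#   mkfs.ext4 -F /dev/nvme0n1p1   # -F forces mkfs on a previously used device
#   mount /dev/nvme0n1p1 /mnt/device
#   fs_smoke_test /mnt/device
#   umount /mnt/device
```

The `-F`/`-f` force flags match what the log shows `make_filesystem` selecting per fstype (`-F` for ext4, `-f` for btrfs and xfs).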
00:12:47.302 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:47.302 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4046282 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.838 00:12:49.838 real 0m3.706s 00:12:49.838 user 0m0.022s 00:12:49.838 sys 0m0.076s 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:49.838 ************************************ 00:12:49.838 END TEST filesystem_in_capsule_xfs 00:12:49.838 ************************************ 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:49.838 11:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.838 11:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4046282 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 4046282 ']' 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 4046282 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.838 11:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4046282 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4046282' 00:12:49.838 killing process with pid 4046282 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 4046282 00:12:49.838 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 4046282 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:50.407 00:12:50.407 real 0m12.702s 00:12:50.407 user 0m49.685s 00:12:50.407 sys 0m1.349s 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.407 ************************************ 00:12:50.407 END TEST nvmf_filesystem_in_capsule 00:12:50.407 ************************************ 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.407 rmmod nvme_tcp 00:12:50.407 rmmod nvme_fabrics 00:12:50.407 rmmod nvme_keyring 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.407 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.953 00:12:52.953 real 
0m34.839s 00:12:52.953 user 1m44.605s 00:12:52.953 sys 0m7.327s 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:52.953 ************************************ 00:12:52.953 END TEST nvmf_filesystem 00:12:52.953 ************************************ 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.953 ************************************ 00:12:52.953 START TEST nvmf_target_discovery 00:12:52.953 ************************************ 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:52.953 * Looking for test storage... 
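Earlier in the run, teardown went through the `killprocess 4046282` helper ("killing process with pid 4046282"), which probes the pid, checks the process name, then kills and reaps it. A hedged sketch of that pattern follows; the helper's exact internals in autotest_common.sh are assumptions distilled from the log output.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern visible in the log (helper name
# and guard details are assumptions based on the logged commands).
killprocess_sketch() {
    local pid=$1
    # kill -0 probes for existence without delivering a signal
    kill -0 "$pid" 2>/dev/null || return 0
    # refuse to kill a process whose command name looks wrong (e.g. sudo)
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child so no zombie is left behind
    wait "$pid" 2>/dev/null || true
}
```

The `wait` at the end mirrors the log's `kill` followed by `wait 4046282`: the kill only delivers the signal, while the wait blocks until the target has actually exited before the script moves on to cleanup.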
00:12:52.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.953 11:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:58.229 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.229 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:58.229 
11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:58.229 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:58.229 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:58.229 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:12:58.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:58.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:58.230 11:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:58.230 Found net devices under 0000:af:00.0: cvl_0_0 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:58.230 11:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:58.230 Found net devices under 0000:af:00.1: cvl_0_1 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
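
The `gather_supported_nvmf_pci_devs` trace above walks sysfs, matches each PCI function's vendor:device pair against a table of known NICs, and records the netdev under each match. A minimal read-only sketch of that idea (not the actual nvmf/common.sh implementation; only the Intel IDs seen in this log are listed, and the script prints nothing on a host without such NICs):

```shell
#!/bin/sh
# Return success if vendor:device is an NVMe-oF-capable NIC we recognize.
# IDs below are the ones matched in this log; the real table is larger.
is_supported_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) return 0 ;;  # Intel E810 variants
        0x8086:0x37d2) return 0 ;;                # Intel X722
        *) return 1 ;;
    esac
}

# Walk every PCI function and report supported NICs plus their netdevs.
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/vendor" ] || continue
    if is_supported_nic "$(cat "$dev/vendor")" "$(cat "$dev/device")"; then
        echo "Found ${dev##*/} ($(cat "$dev/vendor") - $(cat "$dev/device"))"
        for net in "$dev"/net/*; do
            [ -e "$net" ] && echo "  net device: ${net##*/}"
        done
    fi
done
```

On the WFP16 node this enumeration found the two E810 functions 0000:af:00.0/0000:af:00.1 with netdevs cvl_0_0 and cvl_0_1, as the log shows.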
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.230 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:58.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:12:58.490 00:12:58.490 --- 10.0.0.2 ping statistics --- 00:12:58.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.490 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:12:58.490 00:12:58.490 --- 10.0.0.1 ping statistics --- 00:12:58.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.490 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:58.490 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:58.490 11:59:35 
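
The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@229-268) moves the target NIC into its own network namespace, leaves the initiator NIC in the root namespace, and verifies reachability both ways. A dry-run sketch of those steps, with commands printed rather than executed since they require root (interface names and addresses taken from this log):

```shell
#!/bin/sh
# Print the per-namespace TCP test-net setup performed by nvmf_tcp_init:
# $1 = target interface (moved into its own netns), $2 = initiator interface.
emit_tcp_init() {
    target_if=$1 initiator_if=$2 ns="${1}_ns_spdk"
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

emit_tcp_init cvl_0_0 cvl_0_1
```

The two pings at the end mirror the 10.0.0.2 / 10.0.0.1 checks in the log; once both succeed, every later target command is run via `ip netns exec cvl_0_0_ns_spdk`.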
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4052308 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4052308 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 4052308 ']' 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.491 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:58.750 [2024-07-25 11:59:35.828252] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:12:58.750 [2024-07-25 11:59:35.828313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.750 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.750 [2024-07-25 11:59:35.915962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.750 [2024-07-25 11:59:36.012391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.750 [2024-07-25 11:59:36.012434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.750 [2024-07-25 11:59:36.012444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.750 [2024-07-25 11:59:36.012457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.750 [2024-07-25 11:59:36.012465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.750 [2024-07-25 11:59:36.012513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.750 [2024-07-25 11:59:36.012646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.750 [2024-07-25 11:59:36.012694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.750 [2024-07-25 11:59:36.012693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 [2024-07-25 11:59:36.825333] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:59.688 11:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 Null1 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 [2024-07-25 11:59:36.877692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 Null2 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.688 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 
11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 Null3 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 Null4 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:59.689 11:59:36 
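
The discovery.sh@26-35 loop traced above creates four null bdevs, wraps each in its own subsystem with a TCP listener, then adds a discovery listener and a referral on port 4430. A sketch of that sequence with the RPC invocations printed instead of sent (here `rpc.py` stands in for the log's `rpc_cmd` wrapper, which additionally targets the netns-scoped socket; no running nvmf_tgt is assumed):

```shell
#!/bin/sh
# Echo each RPC instead of executing it, so the sequence is visible
# without a live target.
rpc() { echo "rpc.py $*"; }

for i in $(seq 1 4); do
    rpc bdev_null_create "Null$i" 102400 512   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "$(printf 'SPDK%014d' "$i")"     # serials as seen in the log
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # NVMF_PORT_REFERRAL
```

This layout is exactly what the `nvme discover` output below reports: one current discovery subsystem, four NVMe subsystems on 4420, and one referral entry on 4430.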
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.689 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.949 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.949 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:59.949 00:12:59.949 Discovery Log Number of Records 6, Generation counter 6 00:12:59.949 =====Discovery Log Entry 0====== 00:12:59.949 trtype: tcp 00:12:59.949 adrfam: ipv4 00:12:59.949 subtype: current discovery subsystem 00:12:59.949 treq: not required 00:12:59.949 portid: 0 00:12:59.949 trsvcid: 4420 00:12:59.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:59.949 traddr: 10.0.0.2 00:12:59.949 eflags: explicit discovery connections, duplicate discovery information 00:12:59.949 sectype: none 00:12:59.949 =====Discovery Log Entry 1====== 00:12:59.949 trtype: tcp 00:12:59.949 adrfam: ipv4 00:12:59.949 subtype: nvme subsystem 00:12:59.949 treq: not required 00:12:59.949 portid: 0 00:12:59.949 trsvcid: 4420 00:12:59.949 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:59.949 traddr: 10.0.0.2 00:12:59.949 eflags: none 00:12:59.949 sectype: none 00:12:59.949 =====Discovery Log Entry 2====== 00:12:59.949 trtype: tcp 00:12:59.949 adrfam: ipv4 00:12:59.949 subtype: nvme subsystem 00:12:59.949 treq: not required 00:12:59.949 portid: 0 00:12:59.949 trsvcid: 4420 00:12:59.949 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:59.949 traddr: 10.0.0.2 00:12:59.949 eflags: none 00:12:59.949 sectype: none 00:12:59.949 =====Discovery Log Entry 3====== 00:12:59.949 trtype: tcp 00:12:59.949 adrfam: ipv4 00:12:59.949 subtype: nvme subsystem 00:12:59.949 treq: not required 00:12:59.949 portid: 
0 00:12:59.949 trsvcid: 4420 00:12:59.949 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:59.949 traddr: 10.0.0.2 00:12:59.949 eflags: none 00:12:59.949 sectype: none 00:12:59.949 =====Discovery Log Entry 4====== 00:12:59.949 trtype: tcp 00:12:59.949 adrfam: ipv4 00:12:59.949 subtype: nvme subsystem 00:12:59.949 treq: not required 00:12:59.949 portid: 0 00:12:59.949 trsvcid: 4420 00:12:59.949 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:59.949 traddr: 10.0.0.2 00:12:59.949 eflags: none 00:12:59.949 sectype: none 00:12:59.949 =====Discovery Log Entry 5====== 00:12:59.949 trtype: tcp 00:12:59.949 adrfam: ipv4 00:12:59.949 subtype: discovery subsystem referral 00:12:59.949 treq: not required 00:12:59.949 portid: 0 00:12:59.949 trsvcid: 4430 00:12:59.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:59.949 traddr: 10.0.0.2 00:12:59.949 eflags: none 00:12:59.949 sectype: none 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:59.949 Perform nvmf subsystem discovery via RPC 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.949 [ 00:12:59.949 { 00:12:59.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:59.949 "subtype": "Discovery", 00:12:59.949 "listen_addresses": [ 00:12:59.949 { 00:12:59.949 "trtype": "TCP", 00:12:59.949 "adrfam": "IPv4", 00:12:59.949 "traddr": "10.0.0.2", 00:12:59.949 "trsvcid": "4420" 00:12:59.949 } 00:12:59.949 ], 00:12:59.949 "allow_any_host": true, 00:12:59.949 "hosts": [] 00:12:59.949 }, 00:12:59.949 { 00:12:59.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.949 "subtype": "NVMe", 00:12:59.949 "listen_addresses": [ 
00:12:59.949 { 00:12:59.949 "trtype": "TCP", 00:12:59.949 "adrfam": "IPv4", 00:12:59.949 "traddr": "10.0.0.2", 00:12:59.949 "trsvcid": "4420" 00:12:59.949 } 00:12:59.949 ], 00:12:59.949 "allow_any_host": true, 00:12:59.949 "hosts": [], 00:12:59.949 "serial_number": "SPDK00000000000001", 00:12:59.949 "model_number": "SPDK bdev Controller", 00:12:59.949 "max_namespaces": 32, 00:12:59.949 "min_cntlid": 1, 00:12:59.949 "max_cntlid": 65519, 00:12:59.949 "namespaces": [ 00:12:59.949 { 00:12:59.949 "nsid": 1, 00:12:59.949 "bdev_name": "Null1", 00:12:59.949 "name": "Null1", 00:12:59.949 "nguid": "0F13C56E7F334CD3BD0A4DE23FEBE90A", 00:12:59.949 "uuid": "0f13c56e-7f33-4cd3-bd0a-4de23febe90a" 00:12:59.949 } 00:12:59.949 ] 00:12:59.949 }, 00:12:59.949 { 00:12:59.949 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:59.949 "subtype": "NVMe", 00:12:59.949 "listen_addresses": [ 00:12:59.949 { 00:12:59.949 "trtype": "TCP", 00:12:59.949 "adrfam": "IPv4", 00:12:59.949 "traddr": "10.0.0.2", 00:12:59.949 "trsvcid": "4420" 00:12:59.949 } 00:12:59.949 ], 00:12:59.949 "allow_any_host": true, 00:12:59.949 "hosts": [], 00:12:59.949 "serial_number": "SPDK00000000000002", 00:12:59.949 "model_number": "SPDK bdev Controller", 00:12:59.949 "max_namespaces": 32, 00:12:59.949 "min_cntlid": 1, 00:12:59.949 "max_cntlid": 65519, 00:12:59.949 "namespaces": [ 00:12:59.949 { 00:12:59.949 "nsid": 1, 00:12:59.949 "bdev_name": "Null2", 00:12:59.949 "name": "Null2", 00:12:59.949 "nguid": "230DF423647F4F7DA18C9E8AB69FCA30", 00:12:59.949 "uuid": "230df423-647f-4f7d-a18c-9e8ab69fca30" 00:12:59.949 } 00:12:59.949 ] 00:12:59.949 }, 00:12:59.949 { 00:12:59.949 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:59.949 "subtype": "NVMe", 00:12:59.949 "listen_addresses": [ 00:12:59.949 { 00:12:59.949 "trtype": "TCP", 00:12:59.949 "adrfam": "IPv4", 00:12:59.949 "traddr": "10.0.0.2", 00:12:59.949 "trsvcid": "4420" 00:12:59.949 } 00:12:59.949 ], 00:12:59.949 "allow_any_host": true, 00:12:59.949 "hosts": [], 00:12:59.949 
"serial_number": "SPDK00000000000003", 00:12:59.949 "model_number": "SPDK bdev Controller", 00:12:59.949 "max_namespaces": 32, 00:12:59.949 "min_cntlid": 1, 00:12:59.949 "max_cntlid": 65519, 00:12:59.949 "namespaces": [ 00:12:59.949 { 00:12:59.949 "nsid": 1, 00:12:59.949 "bdev_name": "Null3", 00:12:59.949 "name": "Null3", 00:12:59.949 "nguid": "F9C04C8DB961471BA39CB4465AB328AF", 00:12:59.949 "uuid": "f9c04c8d-b961-471b-a39c-b4465ab328af" 00:12:59.949 } 00:12:59.949 ] 00:12:59.949 }, 00:12:59.949 { 00:12:59.949 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:59.949 "subtype": "NVMe", 00:12:59.949 "listen_addresses": [ 00:12:59.949 { 00:12:59.949 "trtype": "TCP", 00:12:59.949 "adrfam": "IPv4", 00:12:59.949 "traddr": "10.0.0.2", 00:12:59.949 "trsvcid": "4420" 00:12:59.949 } 00:12:59.949 ], 00:12:59.949 "allow_any_host": true, 00:12:59.949 "hosts": [], 00:12:59.949 "serial_number": "SPDK00000000000004", 00:12:59.949 "model_number": "SPDK bdev Controller", 00:12:59.949 "max_namespaces": 32, 00:12:59.949 "min_cntlid": 1, 00:12:59.949 "max_cntlid": 65519, 00:12:59.949 "namespaces": [ 00:12:59.949 { 00:12:59.949 "nsid": 1, 00:12:59.949 "bdev_name": "Null4", 00:12:59.949 "name": "Null4", 00:12:59.949 "nguid": "21520C04FB504AB3B5743D2D567A0DDF", 00:12:59.949 "uuid": "21520c04-fb50-4ab3-b574-3d2d567a0ddf" 00:12:59.949 } 00:12:59.949 ] 00:12:59.949 } 00:12:59.949 ] 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.949 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.950 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.209 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.209 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:00.209 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.209 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:00.209 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:00.210 
11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:00.210 rmmod nvme_tcp 00:13:00.210 rmmod nvme_fabrics 00:13:00.210 rmmod nvme_keyring 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4052308 ']' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4052308 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 4052308 ']' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 4052308 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4052308 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4052308' 00:13:00.210 killing process with pid 4052308 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 4052308 00:13:00.210 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 4052308 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.470 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.007 00:13:03.007 real 0m9.954s 00:13:03.007 user 0m8.289s 00:13:03.007 sys 0m4.873s 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.007 ************************************ 00:13:03.007 END TEST 
nvmf_target_discovery 00:13:03.007 ************************************ 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.007 ************************************ 00:13:03.007 START TEST nvmf_referrals 00:13:03.007 ************************************ 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:03.007 * Looking for test storage... 00:13:03.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.007 11:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:03.007 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:03.008 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:08.284 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:08.284 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:08.284 Found net devices under 0000:af:00.0: cvl_0_0 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:af:00.1: cvl_0_1' 00:13:08.284 Found net devices under 0000:af:00.1: cvl_0_1 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.284 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.285 11:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.285 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:08.544 00:13:08.544 --- 10.0.0.2 ping statistics --- 00:13:08.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.544 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:13:08.544 00:13:08.544 --- 10.0.0.1 ping statistics --- 00:13:08.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.544 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4056321 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4056321 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 4056321 ']' 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.544 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:08.544 [2024-07-25 11:59:45.835078] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:13:08.544 [2024-07-25 11:59:45.835184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.805 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.805 [2024-07-25 11:59:45.966945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.805 [2024-07-25 11:59:46.062165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.805 [2024-07-25 11:59:46.062202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:08.805 [2024-07-25 11:59:46.062213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.805 [2024-07-25 11:59:46.062222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.805 [2024-07-25 11:59:46.062229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.805 [2024-07-25 11:59:46.062297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.805 [2024-07-25 11:59:46.062336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.805 [2024-07-25 11:59:46.062447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.805 [2024-07-25 11:59:46.062447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 [2024-07-25 11:59:46.847858] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 [2024-07-25 11:59:46.868069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:09.781 11:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:09.781 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.041 11:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.041 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:10.300 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.301 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:10.560 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.820 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:10.820 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:10.820 11:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:10.820 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:10.820 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:10.820 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.820 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:11.079 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.339 rmmod nvme_tcp 00:13:11.339 rmmod nvme_fabrics 00:13:11.339 rmmod nvme_keyring 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4056321 ']' 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4056321 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 4056321 ']' 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 4056321 00:13:11.339 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4056321 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4056321' 00:13:11.340 killing process with pid 4056321 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 4056321 00:13:11.340 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 4056321 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.599 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.504 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.504 00:13:13.504 real 0m11.033s 00:13:13.504 user 0m13.320s 00:13:13.504 sys 0m5.168s 00:13:13.504 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.504 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.504 ************************************ 00:13:13.504 END TEST nvmf_referrals 00:13:13.504 ************************************ 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.762 ************************************ 00:13:13.762 START TEST nvmf_connect_disconnect 00:13:13.762 ************************************ 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:13.762 * Looking for test storage... 00:13:13.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.762 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.763 11:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.763 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.329 11:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.329 11:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:20.329 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:20.329 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.329 11:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:20.329 Found net devices under 0000:af:00.0: cvl_0_0 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.329 
11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:20.329 Found net devices under 0000:af:00.1: cvl_0_1 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.329 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:20.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:13:20.330 00:13:20.330 --- 10.0.0.2 ping statistics --- 00:13:20.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.330 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:13:20.330 00:13:20.330 --- 10.0.0.1 ping statistics --- 00:13:20.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.330 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4060398 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4060398 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 4060398 ']' 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.330 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.330 [2024-07-25 11:59:56.728693] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:13:20.330 [2024-07-25 11:59:56.728758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.330 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.330 [2024-07-25 11:59:56.819060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.330 [2024-07-25 11:59:56.912937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.330 [2024-07-25 11:59:56.912979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.330 [2024-07-25 11:59:56.912989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.330 [2024-07-25 11:59:56.912998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.330 [2024-07-25 11:59:56.913006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.330 [2024-07-25 11:59:56.913054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.330 [2024-07-25 11:59:56.913167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.330 [2024-07-25 11:59:56.913278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.330 [2024-07-25 11:59:56.913278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.330 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.330 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:20.330 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.330 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.330 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.588 [2024-07-25 11:59:57.653724] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.588 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.589 11:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.589 [2024-07-25 11:59:57.713570] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:20.589 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:23.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.991 rmmod nvme_tcp 00:13:37.991 rmmod nvme_fabrics 00:13:37.991 rmmod nvme_keyring 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4060398 ']' 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4060398 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 4060398 ']' 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 4060398 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4060398 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4060398' 00:13:37.991 killing process with pid 4060398 00:13:37.991 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 4060398 00:13:37.991 12:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 4060398 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.250 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.781 00:13:40.781 real 0m26.723s 00:13:40.781 user 1m15.336s 00:13:40.781 sys 0m5.664s 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.781 ************************************ 00:13:40.781 END TEST nvmf_connect_disconnect 00:13:40.781 ************************************ 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.781 ************************************ 00:13:40.781 START TEST nvmf_multitarget 00:13:40.781 ************************************ 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:40.781 * Looking for test storage... 00:13:40.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.781 12:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:40.781 
12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.781 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.107 12:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.107 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.365 12:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:46.365 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:46.365 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.365 12:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:46.365 Found net devices under 0000:af:00.0: cvl_0_0 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:46.365 Found net devices under 0000:af:00.1: cvl_0_1 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.365 12:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.365 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.366 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.623 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:46.623 00:13:46.623 --- 10.0.0.2 ping statistics --- 00:13:46.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.624 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:13:46.624 00:13:46.624 --- 10.0.0.1 ping statistics --- 00:13:46.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.624 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4067893 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4067893 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 4067893 ']' 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.624 12:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:46.624 [2024-07-25 12:00:23.790219] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:13:46.624 [2024-07-25 12:00:23.790279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.624 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.624 [2024-07-25 12:00:23.877870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.882 [2024-07-25 12:00:23.971009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.882 [2024-07-25 12:00:23.971050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:46.882 [2024-07-25 12:00:23.971061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.882 [2024-07-25 12:00:23.971070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.882 [2024-07-25 12:00:23.971077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.882 [2024-07-25 12:00:23.971135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.882 [2024-07-25 12:00:23.971245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.882 [2024-07-25 12:00:23.971356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.882 [2024-07-25 12:00:23.971356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.450 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.450 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:47.450 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.450 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.450 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:47.709 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.709 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:47.709 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:47.709 12:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:47.709 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:47.709 12:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:47.966 "nvmf_tgt_1" 00:13:47.966 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:47.966 "nvmf_tgt_2" 00:13:47.966 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:47.966 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:47.966 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:47.966 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:48.231 true 00:13:48.231 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:48.231 true 00:13:48.231 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:48.231 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.493 rmmod nvme_tcp 00:13:48.493 rmmod nvme_fabrics 00:13:48.493 rmmod nvme_keyring 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4067893 ']' 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4067893 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 4067893 ']' 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 4067893 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4067893 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4067893' 00:13:48.493 killing process with pid 4067893 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 4067893 00:13:48.493 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 4067893 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.752 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.285 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.285 00:13:51.285 real 
0m10.327s 00:13:51.285 user 0m10.331s 00:13:51.285 sys 0m5.046s 00:13:51.286 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.286 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:51.286 ************************************ 00:13:51.286 END TEST nvmf_multitarget 00:13:51.286 ************************************ 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.286 ************************************ 00:13:51.286 START TEST nvmf_rpc 00:13:51.286 ************************************ 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:51.286 * Looking for test storage... 
00:13:51.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.286 
12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.286 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.286 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.857 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.857 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:57.858 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:57.858 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:57.858 Found net devices under 0000:af:00.0: cvl_0_0 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:57.858 Found net devices under 0000:af:00.1: cvl_0_1 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.858 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.859 12:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.859 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:13:57.859 00:13:57.859 --- 10.0.0.2 ping statistics --- 00:13:57.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.859 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:13:57.859 00:13:57.859 --- 10.0.0.1 ping statistics --- 00:13:57.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.859 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4071902 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4071902 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 4071902 ']' 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.859 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.859 [2024-07-25 12:00:34.283146] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:13:57.859 [2024-07-25 12:00:34.283201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.859 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.859 [2024-07-25 12:00:34.373877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.859 [2024-07-25 12:00:34.464505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.859 [2024-07-25 12:00:34.464551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.859 [2024-07-25 12:00:34.464562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.859 [2024-07-25 12:00:34.464572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.859 [2024-07-25 12:00:34.464582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.859 [2024-07-25 12:00:34.464645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.859 [2024-07-25 12:00:34.464681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.859 [2024-07-25 12:00:34.464793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.859 [2024-07-25 12:00:34.464793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.120 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:58.121 "tick_rate": 2200000000, 00:13:58.121 "poll_groups": [ 00:13:58.121 { 00:13:58.121 "name": "nvmf_tgt_poll_group_000", 00:13:58.121 "admin_qpairs": 0, 00:13:58.121 "io_qpairs": 0, 00:13:58.121 "current_admin_qpairs": 0, 00:13:58.121 "current_io_qpairs": 0, 00:13:58.121 "pending_bdev_io": 0, 00:13:58.121 "completed_nvme_io": 0, 
00:13:58.121 "transports": [] 00:13:58.121 }, 00:13:58.121 { 00:13:58.121 "name": "nvmf_tgt_poll_group_001", 00:13:58.121 "admin_qpairs": 0, 00:13:58.121 "io_qpairs": 0, 00:13:58.121 "current_admin_qpairs": 0, 00:13:58.121 "current_io_qpairs": 0, 00:13:58.121 "pending_bdev_io": 0, 00:13:58.121 "completed_nvme_io": 0, 00:13:58.121 "transports": [] 00:13:58.121 }, 00:13:58.121 { 00:13:58.121 "name": "nvmf_tgt_poll_group_002", 00:13:58.121 "admin_qpairs": 0, 00:13:58.121 "io_qpairs": 0, 00:13:58.121 "current_admin_qpairs": 0, 00:13:58.121 "current_io_qpairs": 0, 00:13:58.121 "pending_bdev_io": 0, 00:13:58.121 "completed_nvme_io": 0, 00:13:58.121 "transports": [] 00:13:58.121 }, 00:13:58.121 { 00:13:58.121 "name": "nvmf_tgt_poll_group_003", 00:13:58.121 "admin_qpairs": 0, 00:13:58.121 "io_qpairs": 0, 00:13:58.121 "current_admin_qpairs": 0, 00:13:58.121 "current_io_qpairs": 0, 00:13:58.121 "pending_bdev_io": 0, 00:13:58.121 "completed_nvme_io": 0, 00:13:58.121 "transports": [] 00:13:58.121 } 00:13:58.121 ] 00:13:58.121 }' 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.121 [2024-07-25 12:00:35.398831] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.121 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:58.384 "tick_rate": 2200000000, 00:13:58.384 "poll_groups": [ 00:13:58.384 { 00:13:58.384 "name": "nvmf_tgt_poll_group_000", 00:13:58.384 "admin_qpairs": 0, 00:13:58.384 "io_qpairs": 0, 00:13:58.384 "current_admin_qpairs": 0, 00:13:58.384 "current_io_qpairs": 0, 00:13:58.384 "pending_bdev_io": 0, 00:13:58.384 "completed_nvme_io": 0, 00:13:58.384 "transports": [ 00:13:58.384 { 00:13:58.384 "trtype": "TCP" 00:13:58.384 } 00:13:58.384 ] 00:13:58.384 }, 00:13:58.384 { 00:13:58.384 "name": "nvmf_tgt_poll_group_001", 00:13:58.384 "admin_qpairs": 0, 00:13:58.384 "io_qpairs": 0, 00:13:58.384 "current_admin_qpairs": 0, 00:13:58.384 "current_io_qpairs": 0, 00:13:58.384 "pending_bdev_io": 0, 00:13:58.384 "completed_nvme_io": 0, 00:13:58.384 "transports": [ 00:13:58.384 { 00:13:58.384 "trtype": "TCP" 00:13:58.384 } 00:13:58.384 ] 00:13:58.384 }, 00:13:58.384 { 00:13:58.384 "name": "nvmf_tgt_poll_group_002", 00:13:58.384 "admin_qpairs": 0, 00:13:58.384 "io_qpairs": 0, 00:13:58.384 "current_admin_qpairs": 0, 00:13:58.384 "current_io_qpairs": 0, 00:13:58.384 "pending_bdev_io": 0, 00:13:58.384 "completed_nvme_io": 0, 00:13:58.384 
"transports": [ 00:13:58.384 { 00:13:58.384 "trtype": "TCP" 00:13:58.384 } 00:13:58.384 ] 00:13:58.384 }, 00:13:58.384 { 00:13:58.384 "name": "nvmf_tgt_poll_group_003", 00:13:58.384 "admin_qpairs": 0, 00:13:58.384 "io_qpairs": 0, 00:13:58.384 "current_admin_qpairs": 0, 00:13:58.384 "current_io_qpairs": 0, 00:13:58.384 "pending_bdev_io": 0, 00:13:58.384 "completed_nvme_io": 0, 00:13:58.384 "transports": [ 00:13:58.384 { 00:13:58.384 "trtype": "TCP" 00:13:58.384 } 00:13:58.384 ] 00:13:58.384 } 00:13:58.384 ] 00:13:58.384 }' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:58.384 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 Malloc1 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 [2024-07-25 12:00:35.591308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:58.384 [2024-07-25 12:00:35.615967] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:13:58.384 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:58.384 could not add new controller: failed to write to nvme-fabrics device 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:58.384 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.901 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.901 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.901 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.901 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:59.901 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:01.799 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:01.800 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:01.800 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.800 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:01.800 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.800 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:01.800 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:01.800 12:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:01.800 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.800 [2024-07-25 12:00:39.089878] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:14:02.058 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:02.058 could not add new controller: failed to write to nvme-fabrics device 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.058 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:03.432 12:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:03.432 12:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:03.432 12:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.432 12:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:03.432 12:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:05.338 12:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.338 12:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.338 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.596 [2024-07-25 12:00:42.642353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.596 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.985 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.985 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.985 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.985 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.985 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:08.886 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.886 [2024-07-25 12:00:46.136082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.886 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.262 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.262 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:14:10.262 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.262 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:10.262 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:12.165 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:12.165 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:12.165 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.423 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o 
NAME,SERIAL 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.424 [2024-07-25 12:00:49.604944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.424 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.797 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.797 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:13.797 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.797 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:13.797 
12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:15.695 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.954 [2024-07-25 12:00:53.102496] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.954 12:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.326 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.326 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:17.326 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.326 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:17.326 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:19.227 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.484 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 [2024-07-25 12:00:56.619245] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 12:00:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.860 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.860 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.861 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.861 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:20.861 12:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:22.761 12:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.018 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.018 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 [2024-07-25 12:01:00.136993] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 
12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 
12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 [2024-07-25 12:01:00.185135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 [2024-07-25 12:01:00.237300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 [2024-07-25 12:01:00.285477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.019 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 [2024-07-25 12:01:00.333712] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.279 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:23.279 "tick_rate": 2200000000, 00:14:23.279 "poll_groups": [ 00:14:23.279 { 00:14:23.279 "name": "nvmf_tgt_poll_group_000", 00:14:23.279 "admin_qpairs": 2, 00:14:23.279 "io_qpairs": 196, 00:14:23.279 "current_admin_qpairs": 0, 00:14:23.279 "current_io_qpairs": 0, 00:14:23.279 "pending_bdev_io": 0, 00:14:23.279 "completed_nvme_io": 296, 00:14:23.279 "transports": [ 00:14:23.279 { 00:14:23.279 "trtype": "TCP" 00:14:23.279 } 00:14:23.279 ] 00:14:23.279 }, 00:14:23.279 { 00:14:23.279 "name": "nvmf_tgt_poll_group_001", 00:14:23.279 "admin_qpairs": 2, 00:14:23.279 "io_qpairs": 196, 00:14:23.279 "current_admin_qpairs": 0, 00:14:23.279 "current_io_qpairs": 0, 00:14:23.279 "pending_bdev_io": 0, 00:14:23.279 "completed_nvme_io": 248, 00:14:23.279 "transports": [ 00:14:23.279 { 00:14:23.279 "trtype": "TCP" 00:14:23.279 } 00:14:23.279 ] 00:14:23.279 }, 00:14:23.279 { 00:14:23.279 "name": "nvmf_tgt_poll_group_002", 00:14:23.279 "admin_qpairs": 1, 00:14:23.279 "io_qpairs": 196, 00:14:23.279 "current_admin_qpairs": 0, 00:14:23.279 "current_io_qpairs": 0, 00:14:23.279 "pending_bdev_io": 0, 00:14:23.279 "completed_nvme_io": 295, 00:14:23.279 "transports": [ 00:14:23.279 { 00:14:23.279 "trtype": "TCP" 00:14:23.279 } 00:14:23.279 ] 00:14:23.279 }, 00:14:23.279 { 00:14:23.279 "name": "nvmf_tgt_poll_group_003", 00:14:23.279 "admin_qpairs": 2, 00:14:23.279 "io_qpairs": 196, 00:14:23.279 "current_admin_qpairs": 0, 00:14:23.279 "current_io_qpairs": 0, 00:14:23.279 "pending_bdev_io": 0, 
00:14:23.279 "completed_nvme_io": 295, 00:14:23.279 "transports": [ 00:14:23.279 { 00:14:23.279 "trtype": "TCP" 00:14:23.279 } 00:14:23.279 ] 00:14:23.279 } 00:14:23.279 ] 00:14:23.279 }' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.279 rmmod nvme_tcp 00:14:23.279 rmmod nvme_fabrics 00:14:23.279 rmmod nvme_keyring 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4071902 ']' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4071902 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 4071902 ']' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 4071902 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.279 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4071902 00:14:23.539 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.539 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.539 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4071902' 00:14:23.539 killing process with pid 4071902 00:14:23.539 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 4071902 00:14:23.539 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 4071902 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.797 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.697 00:14:25.697 real 0m34.841s 00:14:25.697 user 1m46.653s 00:14:25.697 sys 0m6.599s 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.697 ************************************ 00:14:25.697 END TEST nvmf_rpc 00:14:25.697 ************************************ 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:14:25.697 ************************************ 00:14:25.697 START TEST nvmf_invalid 00:14:25.697 ************************************ 00:14:25.697 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:25.956 * Looking for test storage... 00:14:25.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.956 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.957 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.351 12:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.351 
12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:31.351 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.351 12:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:31.351 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.351 
12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.351 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:31.352 Found net devices under 0000:af:00.0: cvl_0_0 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:31.352 Found net devices under 0000:af:00.1: cvl_0_1 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.352 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:14:31.611 00:14:31.611 --- 10.0.0.2 ping statistics --- 00:14:31.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.611 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:14:31.611 00:14:31.611 --- 10.0.0.1 ping statistics --- 00:14:31.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.611 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.611 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4080433 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4080433 00:14:31.870 12:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 4080433 ']' 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.870 12:01:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.870 [2024-07-25 12:01:09.005544] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:14:31.870 [2024-07-25 12:01:09.005609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.870 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.870 [2024-07-25 12:01:09.093282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.128 [2024-07-25 12:01:09.186483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.128 [2024-07-25 12:01:09.186525] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:32.128 [2024-07-25 12:01:09.186535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.128 [2024-07-25 12:01:09.186544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.128 [2024-07-25 12:01:09.186552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.128 [2024-07-25 12:01:09.186609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.128 [2024-07-25 12:01:09.186712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.128 [2024-07-25 12:01:09.186823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.128 [2024-07-25 12:01:09.186824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.695 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:32.696 12:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode366 00:14:32.954 [2024-07-25 12:01:10.140653] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:32.954 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:32.954 { 00:14:32.954 "nqn": "nqn.2016-06.io.spdk:cnode366", 00:14:32.954 "tgt_name": "foobar", 00:14:32.954 "method": "nvmf_create_subsystem", 00:14:32.954 "req_id": 1 00:14:32.954 } 00:14:32.954 Got JSON-RPC error response 00:14:32.954 response: 00:14:32.954 { 00:14:32.954 "code": -32603, 00:14:32.954 "message": "Unable to find target foobar" 00:14:32.954 }' 00:14:32.954 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:32.954 { 00:14:32.954 "nqn": "nqn.2016-06.io.spdk:cnode366", 00:14:32.954 "tgt_name": "foobar", 00:14:32.954 "method": "nvmf_create_subsystem", 00:14:32.954 "req_id": 1 00:14:32.954 } 00:14:32.954 Got JSON-RPC error response 00:14:32.954 response: 00:14:32.954 { 00:14:32.954 "code": -32603, 00:14:32.954 "message": "Unable to find target foobar" 00:14:32.954 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:32.954 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:32.954 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23899 00:14:33.212 [2024-07-25 12:01:10.405741] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23899: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:33.212 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:33.212 { 00:14:33.212 "nqn": "nqn.2016-06.io.spdk:cnode23899", 00:14:33.212 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:33.212 "method": "nvmf_create_subsystem", 00:14:33.212 "req_id": 1 00:14:33.212 } 00:14:33.212 Got JSON-RPC error response 00:14:33.212 response: 
00:14:33.212 { 00:14:33.212 "code": -32602, 00:14:33.212 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:33.212 }' 00:14:33.212 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:33.212 { 00:14:33.212 "nqn": "nqn.2016-06.io.spdk:cnode23899", 00:14:33.212 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:33.212 "method": "nvmf_create_subsystem", 00:14:33.212 "req_id": 1 00:14:33.212 } 00:14:33.212 Got JSON-RPC error response 00:14:33.212 response: 00:14:33.212 { 00:14:33.212 "code": -32602, 00:14:33.212 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:33.212 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:33.212 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:33.212 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18825 00:14:33.470 [2024-07-25 12:01:10.674663] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18825: invalid model number 'SPDK_Controller' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:33.470 { 00:14:33.470 "nqn": "nqn.2016-06.io.spdk:cnode18825", 00:14:33.470 "model_number": "SPDK_Controller\u001f", 00:14:33.470 "method": "nvmf_create_subsystem", 00:14:33.470 "req_id": 1 00:14:33.470 } 00:14:33.470 Got JSON-RPC error response 00:14:33.470 response: 00:14:33.470 { 00:14:33.470 "code": -32602, 00:14:33.470 "message": "Invalid MN SPDK_Controller\u001f" 00:14:33.470 }' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:33.470 { 00:14:33.470 "nqn": "nqn.2016-06.io.spdk:cnode18825", 00:14:33.470 "model_number": "SPDK_Controller\u001f", 00:14:33.470 "method": "nvmf_create_subsystem", 00:14:33.470 "req_id": 1 00:14:33.470 } 
00:14:33.470 Got JSON-RPC error response 00:14:33.470 response: 00:14:33.470 { 00:14:33.470 "code": -32602, 00:14:33.470 "message": "Invalid MN SPDK_Controller\u001f" 00:14:33.470 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:33.470 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:33.729 12:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:33.729 12:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:33.729 12:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo kc=LI-Ul3nqKxDvIQQ#nL 00:14:33.729 12:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s kc=LI-Ul3nqKxDvIQQ#nL nqn.2016-06.io.spdk:cnode4564 00:14:34.002 [2024-07-25 12:01:11.076379] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4564: invalid serial number 'kc=LI-Ul3nqKxDvIQQ#nL' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:34.002 { 00:14:34.002 "nqn": "nqn.2016-06.io.spdk:cnode4564", 00:14:34.002 "serial_number": "kc=LI-Ul3nqKxDvIQQ#nL", 00:14:34.002 "method": "nvmf_create_subsystem", 00:14:34.002 "req_id": 1 00:14:34.002 } 00:14:34.002 Got JSON-RPC error response 00:14:34.002 response: 00:14:34.002 { 00:14:34.002 "code": -32602, 00:14:34.002 "message": "Invalid SN kc=LI-Ul3nqKxDvIQQ#nL" 00:14:34.002 }' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:34.002 { 00:14:34.002 "nqn": "nqn.2016-06.io.spdk:cnode4564", 00:14:34.002 "serial_number": "kc=LI-Ul3nqKxDvIQQ#nL", 00:14:34.002 "method": "nvmf_create_subsystem", 00:14:34.002 "req_id": 1 00:14:34.002 } 00:14:34.002 Got JSON-RPC error response 
00:14:34.002 response: 00:14:34.002 { 00:14:34.002 "code": -32602, 00:14:34.002 "message": "Invalid SN kc=LI-Ul3nqKxDvIQQ#nL" 00:14:34.002 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 39 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=1 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7e' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:34.002 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 69 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.003 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78'
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:14:34.264 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]]
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\'\'' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw'\'''\''$^KrO'
00:14:34.265 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\'\'' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw'\'''\''$^KrO' nqn.2016-06.io.spdk:cnode20489
00:14:34.524 [2024-07-25 12:01:11.602361] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20489: invalid model number '\' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw''$^KrO'
00:14:34.524 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:14:34.524 {
00:14:34.524 "nqn": "nqn.2016-06.io.spdk:cnode20489",
00:14:34.524 "model_number": "\\'\'' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw'\'''\''$^KrO",
00:14:34.524 "method": "nvmf_create_subsystem",
00:14:34.524 "req_id": 1
00:14:34.524 }
00:14:34.524 Got JSON-RPC error response
00:14:34.524 response:
00:14:34.524 {
00:14:34.524 "code": -32602,
00:14:34.524 "message": "Invalid MN \\'\'' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw'\'''\''$^KrO"
00:14:34.524 }'
00:14:34.524 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:14:34.524 {
00:14:34.524 "nqn": "nqn.2016-06.io.spdk:cnode20489",
00:14:34.524 "model_number": "\\' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw''$^KrO",
00:14:34.524 "method": "nvmf_create_subsystem",
00:14:34.524 "req_id": 1
00:14:34.524 }
00:14:34.524 Got JSON-RPC error response
00:14:34.524 response:
00:14:34.524 {
00:14:34.524 "code": -32602,
00:14:34.524 "message": "Invalid MN \\' p87>iuM|g1D$S~,}nElOA@,|?xZn7Dw''$^KrO"
00:14:34.524 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:14:34.524 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:14:34.782 [2024-07-25 12:01:11.867479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:34.782 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:14:35.040 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:14:35.040 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:14:35.040 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:14:35.040 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:14:35.040 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:14:35.298 [2024-07-25 12:01:12.397515] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:14:35.298 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:14:35.298 {
00:14:35.298 "nqn": "nqn.2016-06.io.spdk:cnode",
00:14:35.298 "listen_address": {
00:14:35.298 "trtype": "tcp",
00:14:35.298 "traddr": "",
00:14:35.298 "trsvcid": "4421"
00:14:35.298 },
00:14:35.298 "method": "nvmf_subsystem_remove_listener",
00:14:35.298 "req_id": 1
00:14:35.298 }
00:14:35.298 Got JSON-RPC error response
00:14:35.298 response:
00:14:35.298 {
00:14:35.298 "code": -32602,
00:14:35.298 "message": "Invalid parameters"
00:14:35.298 }'
00:14:35.298 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:14:35.298 {
00:14:35.298 "nqn": "nqn.2016-06.io.spdk:cnode",
00:14:35.298 "listen_address": {
00:14:35.298 "trtype": "tcp",
00:14:35.298 "traddr": "",
00:14:35.298 "trsvcid": "4421"
00:14:35.298 },
00:14:35.298 "method": "nvmf_subsystem_remove_listener",
00:14:35.298 "req_id": 1
00:14:35.298 }
00:14:35.298 Got JSON-RPC error response
00:14:35.298 response:
00:14:35.298 {
00:14:35.298 "code": -32602,
00:14:35.298 "message": "Invalid parameters"
00:14:35.298 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:14:35.299 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32551 -i 0
00:14:35.299 [2024-07-25 12:01:12.578102] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32551: invalid cntlid range [0-65519]
00:14:35.556 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:14:35.556 {
00:14:35.556 "nqn": "nqn.2016-06.io.spdk:cnode32551",
00:14:35.556 "min_cntlid": 0,
00:14:35.556
"method": "nvmf_create_subsystem", 00:14:35.556 "req_id": 1 00:14:35.556 } 00:14:35.556 Got JSON-RPC error response 00:14:35.556 response: 00:14:35.556 { 00:14:35.556 "code": -32602, 00:14:35.556 "message": "Invalid cntlid range [0-65519]" 00:14:35.556 }' 00:14:35.556 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:35.556 { 00:14:35.556 "nqn": "nqn.2016-06.io.spdk:cnode32551", 00:14:35.556 "min_cntlid": 0, 00:14:35.556 "method": "nvmf_create_subsystem", 00:14:35.557 "req_id": 1 00:14:35.557 } 00:14:35.557 Got JSON-RPC error response 00:14:35.557 response: 00:14:35.557 { 00:14:35.557 "code": -32602, 00:14:35.557 "message": "Invalid cntlid range [0-65519]" 00:14:35.557 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.557 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23137 -i 65520 00:14:35.557 [2024-07-25 12:01:12.742707] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23137: invalid cntlid range [65520-65519] 00:14:35.557 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:35.557 { 00:14:35.557 "nqn": "nqn.2016-06.io.spdk:cnode23137", 00:14:35.557 "min_cntlid": 65520, 00:14:35.557 "method": "nvmf_create_subsystem", 00:14:35.557 "req_id": 1 00:14:35.557 } 00:14:35.557 Got JSON-RPC error response 00:14:35.557 response: 00:14:35.557 { 00:14:35.557 "code": -32602, 00:14:35.557 "message": "Invalid cntlid range [65520-65519]" 00:14:35.557 }' 00:14:35.557 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:35.557 { 00:14:35.557 "nqn": "nqn.2016-06.io.spdk:cnode23137", 00:14:35.557 "min_cntlid": 65520, 00:14:35.557 "method": "nvmf_create_subsystem", 00:14:35.557 "req_id": 1 00:14:35.557 } 00:14:35.557 Got JSON-RPC error response 00:14:35.557 
response: 00:14:35.557 { 00:14:35.557 "code": -32602, 00:14:35.557 "message": "Invalid cntlid range [65520-65519]" 00:14:35.557 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.557 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16941 -I 0 00:14:35.814 [2024-07-25 12:01:12.907305] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16941: invalid cntlid range [1-0] 00:14:35.814 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:35.814 { 00:14:35.814 "nqn": "nqn.2016-06.io.spdk:cnode16941", 00:14:35.814 "max_cntlid": 0, 00:14:35.814 "method": "nvmf_create_subsystem", 00:14:35.814 "req_id": 1 00:14:35.814 } 00:14:35.814 Got JSON-RPC error response 00:14:35.814 response: 00:14:35.814 { 00:14:35.814 "code": -32602, 00:14:35.814 "message": "Invalid cntlid range [1-0]" 00:14:35.814 }' 00:14:35.814 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:35.814 { 00:14:35.814 "nqn": "nqn.2016-06.io.spdk:cnode16941", 00:14:35.814 "max_cntlid": 0, 00:14:35.814 "method": "nvmf_create_subsystem", 00:14:35.814 "req_id": 1 00:14:35.814 } 00:14:35.814 Got JSON-RPC error response 00:14:35.814 response: 00:14:35.814 { 00:14:35.814 "code": -32602, 00:14:35.814 "message": "Invalid cntlid range [1-0]" 00:14:35.814 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.814 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20468 -I 65520 00:14:36.072 [2024-07-25 12:01:13.172300] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20468: invalid cntlid range [1-65520] 00:14:36.072 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@79 -- # out='request: 00:14:36.072 { 00:14:36.072 "nqn": "nqn.2016-06.io.spdk:cnode20468", 00:14:36.072 "max_cntlid": 65520, 00:14:36.072 "method": "nvmf_create_subsystem", 00:14:36.072 "req_id": 1 00:14:36.072 } 00:14:36.072 Got JSON-RPC error response 00:14:36.072 response: 00:14:36.072 { 00:14:36.072 "code": -32602, 00:14:36.072 "message": "Invalid cntlid range [1-65520]" 00:14:36.072 }' 00:14:36.072 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:36.072 { 00:14:36.072 "nqn": "nqn.2016-06.io.spdk:cnode20468", 00:14:36.072 "max_cntlid": 65520, 00:14:36.072 "method": "nvmf_create_subsystem", 00:14:36.072 "req_id": 1 00:14:36.072 } 00:14:36.072 Got JSON-RPC error response 00:14:36.072 response: 00:14:36.072 { 00:14:36.072 "code": -32602, 00:14:36.072 "message": "Invalid cntlid range [1-65520]" 00:14:36.072 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:36.072 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6127 -i 6 -I 5 00:14:36.331 [2024-07-25 12:01:13.437336] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6127: invalid cntlid range [6-5] 00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:36.331 { 00:14:36.331 "nqn": "nqn.2016-06.io.spdk:cnode6127", 00:14:36.331 "min_cntlid": 6, 00:14:36.331 "max_cntlid": 5, 00:14:36.331 "method": "nvmf_create_subsystem", 00:14:36.331 "req_id": 1 00:14:36.331 } 00:14:36.331 Got JSON-RPC error response 00:14:36.331 response: 00:14:36.331 { 00:14:36.331 "code": -32602, 00:14:36.331 "message": "Invalid cntlid range [6-5]" 00:14:36.331 }' 00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:36.331 { 00:14:36.331 "nqn": "nqn.2016-06.io.spdk:cnode6127", 00:14:36.331 
"min_cntlid": 6, 00:14:36.331 "max_cntlid": 5, 00:14:36.331 "method": "nvmf_create_subsystem", 00:14:36.331 "req_id": 1 00:14:36.331 } 00:14:36.331 Got JSON-RPC error response 00:14:36.331 response: 00:14:36.331 { 00:14:36.331 "code": -32602, 00:14:36.331 "message": "Invalid cntlid range [6-5]" 00:14:36.331 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:36.331 { 00:14:36.331 "name": "foobar", 00:14:36.331 "method": "nvmf_delete_target", 00:14:36.331 "req_id": 1 00:14:36.331 } 00:14:36.331 Got JSON-RPC error response 00:14:36.331 response: 00:14:36.331 { 00:14:36.331 "code": -32602, 00:14:36.331 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:36.331 }' 00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:36.331 { 00:14:36.331 "name": "foobar", 00:14:36.331 "method": "nvmf_delete_target", 00:14:36.331 "req_id": 1 00:14:36.331 } 00:14:36.331 Got JSON-RPC error response 00:14:36.331 response: 00:14:36.331 { 00:14:36.331 "code": -32602, 00:14:36.331 "message": "The specified target doesn't exist, cannot delete it." 
00:14:36.331 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:36.331 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:36.331 rmmod nvme_tcp
00:14:36.590 rmmod nvme_fabrics
00:14:36.590 rmmod nvme_keyring
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 4080433 ']'
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 4080433
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 4080433 ']'
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 4080433
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4080433
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4080433'
00:14:36.590 killing process with pid 4080433
00:14:36.590 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 4080433
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 4080433
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:36.849 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:38.747 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:38.747
00:14:38.747 real 0m13.013s
00:14:38.747 user 0m23.427s
00:14:38.747 sys 0m5.545s
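Every negative case in the nvmf_invalid run above follows one shape: invoke rpc.py with an out-of-range parameter, capture the JSON-RPC error text into `out`, and glob-match the expected message inside `[[ ]]` (invalid.sh@74, @76, @78, @80, @84, @88). A self-contained sketch of that assertion style with a canned response (the real script captures `out` from rpc.py against the live target):

```shell
# Canned JSON-RPC error body; in the real test this comes from rpc.py.
out='{"code": -32602, "message": "Invalid cntlid range [0-65519]"}'
# The assertion style used by invalid.sh: a bash glob match inside [[ ]].
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo matched
fi
```

The glob match (rather than exact string comparison) lets the test tolerate the surrounding request/response framing in the captured output.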
00:14:38.747 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:38.747 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:14:38.747 ************************************
00:14:38.747 END TEST nvmf_invalid
00:14:38.747 ************************************
00:14:38.748 12:01:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:38.748 12:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:38.748 12:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:38.748 12:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:39.006 ************************************
00:14:39.006 START TEST nvmf_connect_stress
00:14:39.006 ************************************
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:39.006 * Looking for test storage...
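nvmf_connect_stress starts by sourcing nvmf/common.sh, whose gather_supported_nvmf_pci_devs trace appears further down: supported NICs are bucketed into `e810`/`x722`/`mlx` arrays keyed by vendor:device ID out of a `pci_bus_cache` map. A sketch of that lookup with canned data (the real cache is populated from sysfs; the two addresses mirror the "Found 0000:af:00.0/1 (0x8086 - 0x159b)" lines below):

```shell
# Canned stand-in for the sysfs-populated pci_bus_cache in nvmf/common.sh.
intel=0x8086
declare -A pci_bus_cache=(["$intel:0x159b"]="0000:af:00.0 0000:af:00.1")
e810=()
# Same expansion style as nvmf/common.sh@302: the unquoted lookup
# word-splits the cached addresses into individual array elements.
e810+=(${pci_bus_cache["$intel:0x159b"]})
echo "${#e810[@]} e810 device(s)"
```

Leaving the expansion unquoted is deliberate here: it is what turns one space-separated cache entry into two array elements, one per PCI address.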
00:14:39.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- #
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.006 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable
00:14:39.007 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=()
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:45.584 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:45.585 Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:45.585 Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:45.585 12:01:21
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:45.585 Found net devices under 0000:af:00.0: cvl_0_0 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:45.585 Found net devices under 0000:af:00.1: cvl_0_1 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.585 
12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.585 
12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.585 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:45.585 00:14:45.585 --- 10.0.0.2 ping statistics --- 00:14:45.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.585 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:14:45.585 00:14:45.585 --- 10.0.0.1 ping statistics --- 00:14:45.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.585 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4084994 00:14:45.585 12:01:22 
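The `nvmf_tcp_init` sequence traced above carves the two ice ports into a loopback-style test topology: `cvl_0_0` is moved into a fresh network namespace and gets the target address 10.0.0.2, `cvl_0_1` stays in the root namespace with the initiator address 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are verified with a single ping. A minimal sketch of that sequence follows; the interface names and addresses are taken from the log, while the `run` wrapper and `DRY_RUN` variable are illustrative additions so the commands can be previewed without root or the actual NICs:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the netns setup traced in nvmf/common.sh (nvmf_tcp_init).
# DRY_RUN/run are stand-ins added here; set DRY_RUN= to actually execute
# (requires root and the cvl_* net devices).
DRY_RUN=echo
run() { $DRY_RUN "$@"; }

TARGET_IF=cvl_0_0      # moved into the namespace, carries the target IP
INIT_IF=cvl_0_1        # stays in the root namespace, carries the initiator IP
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the initiator-side interface.
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
```

Isolating the target NIC in a namespace is what lets a single host act as both initiator and target over real hardware: traffic between 10.0.0.1 and 10.0.0.2 must cross the physical link rather than the kernel loopback.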
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4084994 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 4084994 ']' 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.585 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.585 [2024-07-25 12:01:22.127754] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:14:45.585 [2024-07-25 12:01:22.127816] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.585 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.585 [2024-07-25 12:01:22.216994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:45.585 [2024-07-25 12:01:22.354958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:45.586 [2024-07-25 12:01:22.355013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.586 [2024-07-25 12:01:22.355027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.586 [2024-07-25 12:01:22.355039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.586 [2024-07-25 12:01:22.355050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.586 [2024-07-25 12:01:22.357653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.586 [2024-07-25 12:01:22.357755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.586 [2024-07-25 12:01:22.357759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.586 [2024-07-25 12:01:22.533314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.586 [2024-07-25 12:01:22.568432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.586 NULL1 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
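The `rpc_cmd` calls traced from connect_stress.sh build up the target in four steps: create the TCP transport, create subsystem `nqn.2016-06.io.spdk:cnode1` with a 10-namespace cap, attach a listener on 10.0.0.2:4420, and back it with the `NULL1` null bdev. A dry-run sketch of the same sequence is below; the RPC verbs and arguments are taken from the log, but the `scripts/rpc.py` invocation path and the `rpc`/`DRY_RUN` wrapper are assumptions added for illustration (the harness's `rpc_cmd` helper resolves the socket and namespace itself):

```shell
#!/usr/bin/env sh
# Dry-run sketch of the RPC sequence from connect_stress.sh lines @15-@18.
# The rpc() wrapper and SPDK_DIR default are illustrative, not the harness's own.
DRY_RUN=echo
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { $DRY_RUN ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte in-capsule data
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                    # null bdev: 1000 MiB, 512-byte blocks
```

A null bdev discards writes and returns zeroes on reads, which makes it a cheap backing device when the test only exercises connect/disconnect churn rather than data integrity.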
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4085215 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.586 12:01:22 
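The repeated `kill -0 4085215` lines from here on are the harness polling whether the connect_stress process (`$PERF_PID`) is still alive between `rpc_cmd` bursts: `kill -0` delivers no signal, it only checks that the PID exists and can be signaled, so a zero exit means the stress tool has not crashed. The pattern in isolation, with a `sleep` process standing in for connect_stress:

```shell
#!/usr/bin/env sh
# kill -0 performs a liveness/permission check without delivering any signal.
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi

kill "$pid"
wait "$pid" 2>/dev/null || true   # reap so the PID can no longer be signaled

if ! kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is gone"
fi
```

Note that until the parent reaps a dead child with `wait`, `kill -0` still succeeds on the zombie, which is why the sketch waits before re-checking.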
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.586 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.845 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.845 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:45.845 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.845 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.845 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.102 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.102 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:46.102 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.102 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.102 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.361 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.361 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:46.361 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.361 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.361 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.928 
12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.928 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:46.928 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.928 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.928 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.186 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.186 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:47.186 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.186 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.186 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.444 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.444 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:47.444 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.444 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.444 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.700 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.700 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 
00:14:47.700 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.700 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.700 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.266 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.266 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:48.267 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.267 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.267 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.527 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.527 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:48.527 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.527 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.527 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.786 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.786 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:48.786 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.786 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:48.786 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.045 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:49.045 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.045 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.045 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.303 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.303 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:49.303 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.303 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.303 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.870 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.870 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:49.870 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.870 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.870 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.128 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:50.128 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:50.128 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.128 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.128 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.386 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.386 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:50.386 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.386 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.386 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.644 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.644 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:50.644 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.644 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.644 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.903 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:50.903 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.903 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.903 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.470 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.470 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:51.470 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.470 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.470 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.728 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.728 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:51.728 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.728 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.728 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.986 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.986 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:51.986 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.986 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.986 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:14:52.245 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.245 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:52.245 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.245 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.245 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.812 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.812 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:52.812 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.812 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.813 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.071 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.071 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:53.071 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.071 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.071 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.329 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.329 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:53.329 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.329 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.329 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.585 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.585 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:53.585 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.585 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.585 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.843 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.843 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:53.843 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.843 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.843 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.410 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.410 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:54.410 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.410 12:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.410 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.668 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.668 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:54.668 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.668 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.668 12:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.926 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.926 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:54.926 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.926 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.926 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.184 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.184 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:55.184 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.184 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.184 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.442 
12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.442 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:55.701 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.701 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.701 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.701 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4085215 00:14:55.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4085215) - No such process 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4085215 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.960 12:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.960 rmmod nvme_tcp 00:14:55.960 rmmod nvme_fabrics 00:14:55.960 rmmod nvme_keyring 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4084994 ']' 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4084994 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 4084994 ']' 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 4084994 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4084994 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
4084994' 00:14:55.960 killing process with pid 4084994 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 4084994 00:14:55.960 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 4084994 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.220 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.772 00:14:58.772 real 0m19.479s 00:14:58.772 user 0m41.035s 00:14:58.772 sys 0m8.316s 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.772 ************************************ 00:14:58.772 END TEST nvmf_connect_stress 00:14:58.772 ************************************ 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.772 ************************************ 00:14:58.772 START TEST nvmf_fused_ordering 00:14:58.772 ************************************ 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:58.772 * Looking for test storage... 00:14:58.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.772 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.773 12:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:58.773 12:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.773 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.044 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.045 12:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:04.045 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:04.045 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:04.045 Found net devices under 0000:af:00.0: cvl_0_0 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:04.045 Found net devices under 0000:af:00.1: cvl_0_1 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.045 12:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.045 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:04.304 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:04.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:04.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:15:04.563
00:15:04.563 --- 10.0.0.2 ping statistics ---
00:15:04.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:04.563 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:04.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:04.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:15:04.563
00:15:04.563 --- 10.0.0.1 ping statistics ---
00:15:04.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:04.563 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4090716
00:15:04.563 12:01:41
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4090716
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 4090716 ']'
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:04.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:04.563 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:04.563 [2024-07-25 12:01:41.723591] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
[2024-07-25 12:01:41.723661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:04.563 EAL: No free 2048 kB hugepages reported on node 1
00:15:04.563 [2024-07-25 12:01:41.812465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:04.823 [2024-07-25 12:01:41.917814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
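The namespace plumbing the harness executed above (nvmf/common.sh@248 through @268) amounts to eight commands that split the two NIC ports into separate network stacks on one host: cvl_0_0 (target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, while cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace. The sketch below only *prints* those commands — the `echo` wrapper makes it safe to run anywhere — with interface names, addresses, and the port-4420 iptables rule taken verbatim from the log:

```shell
#!/bin/sh
# Dry-run sketch of the NVMe/TCP test topology from nvmf/common.sh.
# Nothing here touches the host network; each command is echoed, not executed.
netns_setup_cmds() {
    ns=cvl_0_0_ns_spdk
    echo "ip netns add $ns"                                   # target namespace
    echo "ip link set cvl_0_0 netns $ns"                      # move target port
    echo "ip addr add 10.0.0.1/24 dev cvl_0_1"                # initiator address
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"  # target address
    echo "ip link set cvl_0_1 up"
    echo "ip netns exec $ns ip link set cvl_0_0 up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # allow NVMe/TCP
}
netns_setup_cmds
```

Because the endpoints live in different network stacks, the subsequent `ping -c 1 10.0.0.2` and namespaced `ping -c 1 10.0.0.1` round-trips (~0.2 ms in the log) traverse the actual NIC ports rather than the kernel loopback device, which is what makes this a realistic single-host transport test.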
00:15:04.823 [2024-07-25 12:01:41.917859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:04.823 [2024-07-25 12:01:41.917872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:04.823 [2024-07-25 12:01:41.917883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:04.823 [2024-07-25 12:01:41.917893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:04.823 [2024-07-25 12:01:41.917924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.426 [2024-07-25 12:01:42.710439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.426 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.685 [2024-07-25 12:01:42.734629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.685 NULL1
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:15:05.685 12:01:42
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.685 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:05.685 [2024-07-25 12:01:42.788410] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
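Before launching the fused_ordering tool, target/fused_ordering.sh@15 through @20 provisioned the target over JSON-RPC: create the TCP transport, create the subsystem, add a listener on 10.0.0.2:4420, create a null bdev, wait for bdev examination, and expose the bdev as a namespace. In the harness, `rpc_cmd` forwards these to the running nvmf_tgt over /var/tmp/spdk.sock; the dry-run sketch below transcribes them as plain `scripts/rpc.py` invocations (the `-s` socket form is my assumption of the equivalent standalone call — the log only shows the `rpc_cmd` wrapper). Arguments are copied verbatim from the log:

```shell
#!/bin/sh
# Dry-run transcription of the target/fused_ordering.sh@15-@20 RPC sequence.
# Commands are echoed only; running them for real requires a live nvmf_tgt.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"     # assumed standalone equivalent of rpc_cmd
NQN=nqn.2016-06.io.spdk:cnode1
rpc_provision_cmds() {
    echo "$RPC nvmf_create_transport -t tcp -o -u 8192"            # TCP transport, 8192 B in-capsule data
    echo "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10"
    echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
    echo "$RPC bdev_null_create NULL1 1000 512"                    # null bdev: the 1 GB / 512 B namespace in the log
    echo "$RPC bdev_wait_for_examine"
    echo "$RPC nvmf_subsystem_add_ns $NQN NULL1"
}
rpc_provision_cmds
```

Once this sequence completes, the fused_ordering binary connects as an initiator with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'` and drives the per-command trace that follows.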
00:15:05.685 [2024-07-25 12:01:42.788451] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090824 ]
00:15:05.685 EAL: No free 2048 kB hugepages reported on node 1
00:15:06.252 Attached to nqn.2016-06.io.spdk:cnode1
00:15:06.252 Namespace ID: 1 size: 1GB
00:15:06.252 fused_ordering(0)
00:15:06.252 fused_ordering(1)
[fused_ordering(2) through fused_ordering(773) elided: one trace line per fused command, identical except for the counter; elapsed stamps advance 00:15:06.252 -> 00:15:06.511 -> 00:15:07.079 -> 00:15:07.647 at counters 206, 411, and 616]
00:15:07.647 fused_ordering(774)
00:15:07.647 fused_ordering(775)
00:15:07.647 fused_ordering(776) 00:15:07.647 fused_ordering(777) 00:15:07.647 fused_ordering(778) 00:15:07.647 fused_ordering(779) 00:15:07.647 fused_ordering(780) 00:15:07.647 fused_ordering(781) 00:15:07.647 fused_ordering(782) 00:15:07.647 fused_ordering(783) 00:15:07.647 fused_ordering(784) 00:15:07.647 fused_ordering(785) 00:15:07.647 fused_ordering(786) 00:15:07.647 fused_ordering(787) 00:15:07.647 fused_ordering(788) 00:15:07.647 fused_ordering(789) 00:15:07.647 fused_ordering(790) 00:15:07.647 fused_ordering(791) 00:15:07.647 fused_ordering(792) 00:15:07.647 fused_ordering(793) 00:15:07.647 fused_ordering(794) 00:15:07.647 fused_ordering(795) 00:15:07.647 fused_ordering(796) 00:15:07.647 fused_ordering(797) 00:15:07.647 fused_ordering(798) 00:15:07.647 fused_ordering(799) 00:15:07.647 fused_ordering(800) 00:15:07.647 fused_ordering(801) 00:15:07.648 fused_ordering(802) 00:15:07.648 fused_ordering(803) 00:15:07.648 fused_ordering(804) 00:15:07.648 fused_ordering(805) 00:15:07.648 fused_ordering(806) 00:15:07.648 fused_ordering(807) 00:15:07.648 fused_ordering(808) 00:15:07.648 fused_ordering(809) 00:15:07.648 fused_ordering(810) 00:15:07.648 fused_ordering(811) 00:15:07.648 fused_ordering(812) 00:15:07.648 fused_ordering(813) 00:15:07.648 fused_ordering(814) 00:15:07.648 fused_ordering(815) 00:15:07.648 fused_ordering(816) 00:15:07.648 fused_ordering(817) 00:15:07.648 fused_ordering(818) 00:15:07.648 fused_ordering(819) 00:15:07.648 fused_ordering(820) 00:15:08.213 fused_ordering(821) 00:15:08.213 fused_ordering(822) 00:15:08.213 fused_ordering(823) 00:15:08.213 fused_ordering(824) 00:15:08.213 fused_ordering(825) 00:15:08.213 fused_ordering(826) 00:15:08.213 fused_ordering(827) 00:15:08.213 fused_ordering(828) 00:15:08.213 fused_ordering(829) 00:15:08.213 fused_ordering(830) 00:15:08.213 fused_ordering(831) 00:15:08.214 fused_ordering(832) 00:15:08.214 fused_ordering(833) 00:15:08.214 fused_ordering(834) 00:15:08.214 fused_ordering(835) 00:15:08.214 
fused_ordering(836) 00:15:08.214 fused_ordering(837) 00:15:08.214 fused_ordering(838) 00:15:08.214 fused_ordering(839) 00:15:08.214 fused_ordering(840) 00:15:08.214 fused_ordering(841) 00:15:08.214 fused_ordering(842) 00:15:08.214 fused_ordering(843) 00:15:08.214 fused_ordering(844) 00:15:08.214 fused_ordering(845) 00:15:08.214 fused_ordering(846) 00:15:08.214 fused_ordering(847) 00:15:08.214 fused_ordering(848) 00:15:08.214 fused_ordering(849) 00:15:08.214 fused_ordering(850) 00:15:08.214 fused_ordering(851) 00:15:08.214 fused_ordering(852) 00:15:08.214 fused_ordering(853) 00:15:08.214 fused_ordering(854) 00:15:08.214 fused_ordering(855) 00:15:08.214 fused_ordering(856) 00:15:08.214 fused_ordering(857) 00:15:08.214 fused_ordering(858) 00:15:08.214 fused_ordering(859) 00:15:08.214 fused_ordering(860) 00:15:08.214 fused_ordering(861) 00:15:08.214 fused_ordering(862) 00:15:08.214 fused_ordering(863) 00:15:08.214 fused_ordering(864) 00:15:08.214 fused_ordering(865) 00:15:08.214 fused_ordering(866) 00:15:08.214 fused_ordering(867) 00:15:08.214 fused_ordering(868) 00:15:08.214 fused_ordering(869) 00:15:08.214 fused_ordering(870) 00:15:08.214 fused_ordering(871) 00:15:08.214 fused_ordering(872) 00:15:08.214 fused_ordering(873) 00:15:08.214 fused_ordering(874) 00:15:08.214 fused_ordering(875) 00:15:08.214 fused_ordering(876) 00:15:08.214 fused_ordering(877) 00:15:08.214 fused_ordering(878) 00:15:08.214 fused_ordering(879) 00:15:08.214 fused_ordering(880) 00:15:08.214 fused_ordering(881) 00:15:08.214 fused_ordering(882) 00:15:08.214 fused_ordering(883) 00:15:08.214 fused_ordering(884) 00:15:08.214 fused_ordering(885) 00:15:08.214 fused_ordering(886) 00:15:08.214 fused_ordering(887) 00:15:08.214 fused_ordering(888) 00:15:08.214 fused_ordering(889) 00:15:08.214 fused_ordering(890) 00:15:08.214 fused_ordering(891) 00:15:08.214 fused_ordering(892) 00:15:08.214 fused_ordering(893) 00:15:08.214 fused_ordering(894) 00:15:08.214 fused_ordering(895) 00:15:08.214 fused_ordering(896) 
00:15:08.214 fused_ordering(897) 00:15:08.214 fused_ordering(898) 00:15:08.214 fused_ordering(899) 00:15:08.214 fused_ordering(900) 00:15:08.214 fused_ordering(901) 00:15:08.214 fused_ordering(902) 00:15:08.214 fused_ordering(903) 00:15:08.214 fused_ordering(904) 00:15:08.214 fused_ordering(905) 00:15:08.214 fused_ordering(906) 00:15:08.214 fused_ordering(907) 00:15:08.214 fused_ordering(908) 00:15:08.214 fused_ordering(909) 00:15:08.214 fused_ordering(910) 00:15:08.214 fused_ordering(911) 00:15:08.214 fused_ordering(912) 00:15:08.214 fused_ordering(913) 00:15:08.214 fused_ordering(914) 00:15:08.214 fused_ordering(915) 00:15:08.214 fused_ordering(916) 00:15:08.214 fused_ordering(917) 00:15:08.214 fused_ordering(918) 00:15:08.214 fused_ordering(919) 00:15:08.214 fused_ordering(920) 00:15:08.214 fused_ordering(921) 00:15:08.214 fused_ordering(922) 00:15:08.214 fused_ordering(923) 00:15:08.214 fused_ordering(924) 00:15:08.214 fused_ordering(925) 00:15:08.214 fused_ordering(926) 00:15:08.214 fused_ordering(927) 00:15:08.214 fused_ordering(928) 00:15:08.214 fused_ordering(929) 00:15:08.214 fused_ordering(930) 00:15:08.214 fused_ordering(931) 00:15:08.214 fused_ordering(932) 00:15:08.214 fused_ordering(933) 00:15:08.214 fused_ordering(934) 00:15:08.214 fused_ordering(935) 00:15:08.214 fused_ordering(936) 00:15:08.214 fused_ordering(937) 00:15:08.214 fused_ordering(938) 00:15:08.214 fused_ordering(939) 00:15:08.214 fused_ordering(940) 00:15:08.214 fused_ordering(941) 00:15:08.214 fused_ordering(942) 00:15:08.214 fused_ordering(943) 00:15:08.214 fused_ordering(944) 00:15:08.214 fused_ordering(945) 00:15:08.214 fused_ordering(946) 00:15:08.214 fused_ordering(947) 00:15:08.214 fused_ordering(948) 00:15:08.214 fused_ordering(949) 00:15:08.214 fused_ordering(950) 00:15:08.214 fused_ordering(951) 00:15:08.214 fused_ordering(952) 00:15:08.214 fused_ordering(953) 00:15:08.214 fused_ordering(954) 00:15:08.214 fused_ordering(955) 00:15:08.214 fused_ordering(956) 00:15:08.214 
fused_ordering(957) 00:15:08.214 fused_ordering(958) 00:15:08.214 fused_ordering(959) 00:15:08.214 fused_ordering(960) 00:15:08.214 fused_ordering(961) 00:15:08.214 fused_ordering(962) 00:15:08.214 fused_ordering(963) 00:15:08.214 fused_ordering(964) 00:15:08.214 fused_ordering(965) 00:15:08.214 fused_ordering(966) 00:15:08.214 fused_ordering(967) 00:15:08.214 fused_ordering(968) 00:15:08.214 fused_ordering(969) 00:15:08.214 fused_ordering(970) 00:15:08.214 fused_ordering(971) 00:15:08.214 fused_ordering(972) 00:15:08.214 fused_ordering(973) 00:15:08.214 fused_ordering(974) 00:15:08.214 fused_ordering(975) 00:15:08.214 fused_ordering(976) 00:15:08.214 fused_ordering(977) 00:15:08.214 fused_ordering(978) 00:15:08.214 fused_ordering(979) 00:15:08.214 fused_ordering(980) 00:15:08.214 fused_ordering(981) 00:15:08.214 fused_ordering(982) 00:15:08.214 fused_ordering(983) 00:15:08.214 fused_ordering(984) 00:15:08.214 fused_ordering(985) 00:15:08.214 fused_ordering(986) 00:15:08.214 fused_ordering(987) 00:15:08.214 fused_ordering(988) 00:15:08.214 fused_ordering(989) 00:15:08.214 fused_ordering(990) 00:15:08.214 fused_ordering(991) 00:15:08.214 fused_ordering(992) 00:15:08.214 fused_ordering(993) 00:15:08.214 fused_ordering(994) 00:15:08.214 fused_ordering(995) 00:15:08.214 fused_ordering(996) 00:15:08.214 fused_ordering(997) 00:15:08.214 fused_ordering(998) 00:15:08.214 fused_ordering(999) 00:15:08.214 fused_ordering(1000) 00:15:08.214 fused_ordering(1001) 00:15:08.214 fused_ordering(1002) 00:15:08.214 fused_ordering(1003) 00:15:08.214 fused_ordering(1004) 00:15:08.214 fused_ordering(1005) 00:15:08.214 fused_ordering(1006) 00:15:08.214 fused_ordering(1007) 00:15:08.214 fused_ordering(1008) 00:15:08.214 fused_ordering(1009) 00:15:08.214 fused_ordering(1010) 00:15:08.214 fused_ordering(1011) 00:15:08.214 fused_ordering(1012) 00:15:08.214 fused_ordering(1013) 00:15:08.214 fused_ordering(1014) 00:15:08.214 fused_ordering(1015) 00:15:08.214 fused_ordering(1016) 00:15:08.214 
fused_ordering(1017) 00:15:08.214 fused_ordering(1018) 00:15:08.214 fused_ordering(1019) 00:15:08.214 fused_ordering(1020) 00:15:08.214 fused_ordering(1021) 00:15:08.214 fused_ordering(1022) 00:15:08.214 fused_ordering(1023) 00:15:08.214 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:08.214 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:08.214 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.214 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.473 rmmod nvme_tcp 00:15:08.473 rmmod nvme_fabrics 00:15:08.473 rmmod nvme_keyring 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4090716 ']' 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4090716 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 4090716 ']' 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 4090716 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4090716 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4090716' 00:15:08.473 killing process with pid 4090716 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 4090716 00:15:08.473 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 4090716 00:15:08.731 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:15:08.732 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.637 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.637 00:15:10.637 real 0m12.291s 00:15:10.637 user 0m7.198s 00:15:10.637 sys 0m6.410s 00:15:10.637 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.637 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:10.637 ************************************ 00:15:10.637 END TEST nvmf_fused_ordering 00:15:10.637 ************************************ 00:15:10.897 12:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:10.897 12:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:10.897 12:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.897 12:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:10.897 ************************************ 00:15:10.897 START TEST nvmf_ns_masking 00:15:10.897 ************************************ 00:15:10.897 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:10.897 * Looking for test storage... 
00:15:10.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.897 
12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d8bb9c40-feaf-4e17-97ae-5e69c5b765c6 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f85dea4e-e904-40b4-9d11-a99be61046f4 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c155cb2f-88c6-483e-ac45-82c149c6a517 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.897 12:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.897 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:17.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:17.467 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:17.467 Found net devices under 0000:af:00.0: cvl_0_0 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:17.467 Found net devices under 0000:af:00.1: cvl_0_1 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.467 12:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.467 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.468 12:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:15:17.468 00:15:17.468 --- 10.0.0.2 ping statistics --- 00:15:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.468 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:15:17.468 00:15:17.468 --- 10.0.0.1 ping statistics --- 00:15:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.468 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.468 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4095076 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4095076 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 4095076 ']' 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.468 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 [2024-07-25 12:01:54.094723] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:15:17.468 [2024-07-25 12:01:54.094779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.468 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.468 [2024-07-25 12:01:54.182129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.468 [2024-07-25 12:01:54.272543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.468 [2024-07-25 12:01:54.272585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.468 [2024-07-25 12:01:54.272596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.468 [2024-07-25 12:01:54.272610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.468 [2024-07-25 12:01:54.272619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
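The namespace-masking assertions that follow in this log all reduce to one check: `ns_is_visible` in `target/ns_masking.sh` runs `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid` and treats an all-zeros NGUID as "namespace not exposed to this host". A minimal sketch of just that comparison, using NGUID values taken from this log (no nvme hardware or CLI required):

```python
# Sketch of the visibility test performed repeatedly in the log below.
# `nvme id-ns` reports the namespace's 16-byte NGUID as 32 hex characters;
# when the host is masked out, the controller returns all zeros, which is
# what the shell test `[[ $nguid != \0\0...\0 ]]` is checking.

ZERO_NGUID = "0" * 32  # all-zeros NGUID: namespace hidden from this host

def ns_is_visible(nguid: str) -> bool:
    """True if the NGUID reported by `nvme id-ns` is non-zero, i.e. the
    target actually exposes the namespace to the connecting host."""
    return nguid.lower() != ZERO_NGUID

# NGUIDs observed in this log:
print(ns_is_visible("124e3dd475a44b79a1e64a34ca274b5d"))  # ns 0x1, visible
print(ns_is_visible("00000000000000000000000000000000"))  # masked namespace
```

This is only the pure comparison; the surrounding test also handles the case where `nvme list-ns` does not list the NSID at all, which the `NOT ns_is_visible` wrapper in the log exercises.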
00:15:17.468 [2024-07-25 12:01:54.272647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:18.036 [2024-07-25 12:01:55.298199] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:18.036 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:18.295 Malloc1 00:15:18.554 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:18.554 Malloc2 00:15:18.812 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:19.072 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:19.072 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.331 [2024-07-25 12:01:56.587585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.331 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:19.331 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c155cb2f-88c6-483e-ac45-82c149c6a517 -a 10.0.0.2 -s 4420 -i 4 00:15:19.590 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.590 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.590 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.590 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:19.590 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:21.494 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:21.495 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.753 [ 0]:0x1 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=124e3dd475a44b79a1e64a34ca274b5d 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 124e3dd475a44b79a1e64a34ca274b5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.753 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:22.011 [ 0]:0x1 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=124e3dd475a44b79a1e64a34ca274b5d 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 124e3dd475a44b79a1e64a34ca274b5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.011 [ 1]:0x2 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a738d0ff63ed47f7bacc1d8907fd9128 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:22.011 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.270 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.529 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:22.787 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:22.787 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c155cb2f-88c6-483e-ac45-82c149c6a517 -a 10.0.0.2 -s 4420 -i 4 00:15:23.046 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:23.046 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:23.046 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.046 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:23.046 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:23.046 12:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.960 12:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.960 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:25.218 [ 0]:0x2 00:15:25.218 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.218 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.218 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:25.218 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a738d0ff63ed47f7bacc1d8907fd9128 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.218 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:25.785 [ 0]:0x1 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=124e3dd475a44b79a1e64a34ca274b5d 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 124e3dd475a44b79a1e64a34ca274b5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:15:25.785 [ 1]:0x2 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a738d0ff63ed47f7bacc1d8907fd9128 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.785 12:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:26.044 [ 0]:0x2 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ a738d0ff63ed47f7bacc1d8907fd9128 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.044 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:26.303 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:26.303 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c155cb2f-88c6-483e-ac45-82c149c6a517 -a 10.0.0.2 -s 4420 -i 4 00:15:26.561 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:26.561 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.561 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.561 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:26.561 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:26.561 12:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:29.096 [ 0]:0x1 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.096 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=124e3dd475a44b79a1e64a34ca274b5d 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 124e3dd475a44b79a1e64a34ca274b5d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:29.096 [ 1]:0x2 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a738d0ff63ed47f7bacc1d8907fd9128 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:29.096 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:29.356 [ 0]:0x2 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a738d0ff63ed47f7bacc1d8907fd9128 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:29.356 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:29.616 [2024-07-25 12:02:06.658377] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:29.616 request: 00:15:29.616 { 00:15:29.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.616 "nsid": 2, 00:15:29.616 "host": "nqn.2016-06.io.spdk:host1", 00:15:29.616 "method": "nvmf_ns_remove_host", 00:15:29.616 "req_id": 1 00:15:29.616 } 00:15:29.616 Got JSON-RPC error response 00:15:29.616 response: 00:15:29.616 { 00:15:29.616 "code": -32602, 00:15:29.616 "message": "Invalid parameters" 00:15:29.616 } 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.616 12:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:29.616 [ 0]:0x2 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a738d0ff63ed47f7bacc1d8907fd9128 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a738d0ff63ed47f7bacc1d8907fd9128 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:29.616 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4097535 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4097535 /var/tmp/host.sock 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 4097535 ']' 00:15:29.875 
12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:29.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.875 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:29.875 [2024-07-25 12:02:07.064577] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:15:29.875 [2024-07-25 12:02:07.064649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097535 ] 00:15:29.875 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.875 [2024-07-25 12:02:07.147401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.135 [2024-07-25 12:02:07.249062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.703 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.703 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:30.703 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.962 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.221 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d8bb9c40-feaf-4e17-97ae-5e69c5b765c6 00:15:31.221 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:31.221 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D8BB9C40FEAF4E1797AE5E69C5B765C6 -i 00:15:31.479 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f85dea4e-e904-40b4-9d11-a99be61046f4 00:15:31.479 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:31.479 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F85DEA4EE90440B49D11A99BE61046F4 -i 00:15:31.737 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:31.996 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:32.254 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:32.254 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:32.512 nvme0n1 00:15:32.512 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:32.512 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:33.079 nvme1n2 00:15:33.079 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:33.079 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:33.079 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:33.079 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:33.079 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:33.338 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:33.338 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:33.338 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:33.338 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:33.597 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d8bb9c40-feaf-4e17-97ae-5e69c5b765c6 == \d\8\b\b\9\c\4\0\-\f\e\a\f\-\4\e\1\7\-\9\7\a\e\-\5\e\6\9\c\5\b\7\6\5\c\6 ]] 00:15:33.597 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:33.597 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:33.597 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f85dea4e-e904-40b4-9d11-a99be61046f4 == \f\8\5\d\e\a\4\e\-\e\9\0\4\-\4\0\b\4\-\9\d\1\1\-\a\9\9\b\e\6\1\0\4\6\f\4 ]] 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4097535 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 4097535 ']' 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 4097535 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.856 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4097535 00:15:33.856 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.856 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.856 
12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4097535' 00:15:33.856 killing process with pid 4097535 00:15:33.856 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 4097535 00:15:33.856 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 4097535 00:15:34.115 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.374 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.374 rmmod nvme_tcp 00:15:34.633 rmmod nvme_fabrics 00:15:34.633 rmmod nvme_keyring 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 4095076 ']' 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4095076 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 4095076 ']' 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 4095076 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4095076 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4095076' 00:15:34.633 killing process with pid 4095076 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 4095076 00:15:34.633 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 4095076 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.892 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.795 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.795 00:15:36.795 real 0m26.091s 00:15:36.795 user 0m30.316s 00:15:36.795 sys 0m7.035s 00:15:36.795 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.795 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.795 ************************************ 00:15:36.795 END TEST nvmf_ns_masking 00:15:36.795 ************************************ 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.054 ************************************ 00:15:37.054 START TEST nvmf_nvme_cli 00:15:37.054 ************************************ 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:37.054 * Looking for test storage... 
00:15:37.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.054 12:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.054 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.055 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.685 
12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.685 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.686 12:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:43.686 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:43.686 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:43.686 Found net devices under 0000:af:00.0: cvl_0_0 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:43.686 Found net devices under 0000:af:00.1: cvl_0_1 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.686 12:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.686 12:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.686 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:15:43.686 00:15:43.686 --- 10.0.0.2 ping statistics --- 00:15:43.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.686 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:15:43.686 00:15:43.686 --- 10.0.0.1 ping statistics --- 00:15:43.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.686 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.686 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4101950 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4101950 00:15:43.687 12:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 4101950 ']' 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.687 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.687 [2024-07-25 12:02:20.194114] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:15:43.687 [2024-07-25 12:02:20.194169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.687 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.687 [2024-07-25 12:02:20.279037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.687 [2024-07-25 12:02:20.376779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.687 [2024-07-25 12:02:20.376824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:43.687 [2024-07-25 12:02:20.376834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.687 [2024-07-25 12:02:20.376843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.687 [2024-07-25 12:02:20.376850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.687 [2024-07-25 12:02:20.376904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.687 [2024-07-25 12:02:20.377015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.687 [2024-07-25 12:02:20.377126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.687 [2024-07-25 12:02:20.377126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.946 [2024-07-25 12:02:21.192375] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.946 Malloc0 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.946 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.205 Malloc1 00:15:44.205 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.205 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:44.205 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.205 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.205 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.205 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.206 12:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.206 [2024-07-25 12:02:21.283254] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:44.206 00:15:44.206 Discovery Log Number of Records 2, Generation counter 2 00:15:44.206 =====Discovery Log Entry 0====== 00:15:44.206 trtype: tcp 00:15:44.206 adrfam: ipv4 00:15:44.206 subtype: current discovery subsystem 00:15:44.206 treq: not required 00:15:44.206 portid: 0 00:15:44.206 trsvcid: 4420 00:15:44.206 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:44.206 traddr: 10.0.0.2 00:15:44.206 eflags: explicit discovery connections, duplicate discovery information 00:15:44.206 sectype: none 00:15:44.206 =====Discovery Log Entry 1====== 00:15:44.206 trtype: tcp 00:15:44.206 adrfam: ipv4 00:15:44.206 subtype: nvme subsystem 00:15:44.206 treq: not required 00:15:44.206 portid: 0 00:15:44.206 trsvcid: 4420 00:15:44.206 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:44.206 traddr: 10.0.0.2 00:15:44.206 eflags: none 00:15:44.206 sectype: none 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:44.206 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.583 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:45.583 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:45.583 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.583 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:45.583 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:45.583 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:48.116 /dev/nvme0n1 ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:48.116 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:48.117 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.117 rmmod nvme_tcp 00:15:48.117 rmmod nvme_fabrics 00:15:48.117 rmmod 
nvme_keyring 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4101950 ']' 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4101950 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 4101950 ']' 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 4101950 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4101950 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4101950' 00:15:48.117 killing process with pid 4101950 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 4101950 00:15:48.117 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 4101950 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.376 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.280 00:15:50.280 real 0m13.340s 00:15:50.280 user 0m21.636s 00:15:50.280 sys 0m5.171s 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:50.280 ************************************ 00:15:50.280 END TEST nvmf_nvme_cli 00:15:50.280 ************************************ 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.280 
************************************ 00:15:50.280 START TEST nvmf_vfio_user 00:15:50.280 ************************************ 00:15:50.280 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:50.539 * Looking for test storage... 00:15:50.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.539 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.540 12:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:50.540 12:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4103485 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4103485' 00:15:50.540 Process pid: 4103485 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4103485 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 4103485 ']' 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.540 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 [2024-07-25 12:02:27.745674] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:15:50.540 [2024-07-25 12:02:27.745739] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.540 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.540 [2024-07-25 12:02:27.828915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.800 [2024-07-25 12:02:27.924990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.800 [2024-07-25 12:02:27.925033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.800 [2024-07-25 12:02:27.925044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.800 [2024-07-25 12:02:27.925054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.800 [2024-07-25 12:02:27.925062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:50.800 [2024-07-25 12:02:27.925109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.800 [2024-07-25 12:02:27.925222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.800 [2024-07-25 12:02:27.925335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.800 [2024-07-25 12:02:27.925335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.367 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.367 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:51.367 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:52.744 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:52.744 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:52.744 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:52.744 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:52.744 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:52.744 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:53.003 Malloc1 00:15:53.003 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:53.261 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:53.519 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:53.519 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:53.519 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:53.519 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:53.778 Malloc2 00:15:53.778 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:54.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:54.295 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:54.555 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:54.555 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:54.555 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:54.555 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:54.555 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:54.555 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:54.555 [2024-07-25 12:02:31.678016] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:15:54.555 [2024-07-25 12:02:31.678051] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104152 ] 00:15:54.555 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.555 [2024-07-25 12:02:31.714109] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:54.555 [2024-07-25 12:02:31.717567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:54.555 [2024-07-25 12:02:31.717592] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3a7e8e3000 00:15:54.555 [2024-07-25 12:02:31.718566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.719574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:54.555 [2024-07-25 
12:02:31.720580] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.721601] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.722599] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.723611] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.724619] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.725628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:54.555 [2024-07-25 12:02:31.726640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:54.555 [2024-07-25 12:02:31.726653] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3a7e8d8000 00:15:54.555 [2024-07-25 12:02:31.728061] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:54.555 [2024-07-25 12:02:31.748286] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:54.555 [2024-07-25 12:02:31.748318] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:54.555 [2024-07-25 12:02:31.752883] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:15:54.555 [2024-07-25 12:02:31.752937] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:54.555 [2024-07-25 12:02:31.753038] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:54.555 [2024-07-25 12:02:31.753058] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:54.556 [2024-07-25 12:02:31.753066] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:54.556 [2024-07-25 12:02:31.753879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:54.556 [2024-07-25 12:02:31.753895] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:54.556 [2024-07-25 12:02:31.753905] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:54.556 [2024-07-25 12:02:31.754884] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:54.556 [2024-07-25 12:02:31.754894] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:54.556 [2024-07-25 12:02:31.754907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:54.556 [2024-07-25 12:02:31.755895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:54.556 [2024-07-25 12:02:31.755906] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:54.556 [2024-07-25 12:02:31.756901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:54.556 [2024-07-25 12:02:31.756912] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:54.556 [2024-07-25 12:02:31.756919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:54.556 [2024-07-25 12:02:31.756927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:54.556 [2024-07-25 12:02:31.757035] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:54.556 [2024-07-25 12:02:31.757041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:54.556 [2024-07-25 12:02:31.757048] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:54.556 [2024-07-25 12:02:31.757908] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:54.556 [2024-07-25 12:02:31.758925] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:54.556 [2024-07-25 12:02:31.759932] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:54.556 
[2024-07-25 12:02:31.760937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.556 [2024-07-25 12:02:31.761075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:54.556 [2024-07-25 12:02:31.761956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:54.556 [2024-07-25 12:02:31.761967] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:54.556 [2024-07-25 12:02:31.761973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.761999] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:54.556 [2024-07-25 12:02:31.762015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762033] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:54.556 [2024-07-25 12:02:31.762039] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:54.556 [2024-07-25 12:02:31.762045] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.556 [2024-07-25 12:02:31.762060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762140] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:54.556 [2024-07-25 12:02:31.762146] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:54.556 [2024-07-25 12:02:31.762152] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:54.556 [2024-07-25 12:02:31.762158] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:54.556 [2024-07-25 12:02:31.762165] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:54.556 [2024-07-25 12:02:31.762171] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:54.556 [2024-07-25 12:02:31.762178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.556 [2024-07-25 12:02:31.762252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.556 [2024-07-25 12:02:31.762262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.556 [2024-07-25 12:02:31.762273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.556 [2024-07-25 12:02:31.762279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762324] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:54.556 [2024-07-25 12:02:31.762331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762360] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762449] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762461] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762471] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:54.556 [2024-07-25 12:02:31.762477] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:54.556 [2024-07-25 12:02:31.762482] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.556 [2024-07-25 12:02:31.762490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762523] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:54.556 [2024-07-25 12:02:31.762534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:54.556 [2024-07-25 
12:02:31.762553] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:54.556 [2024-07-25 12:02:31.762559] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:54.556 [2024-07-25 12:02:31.762563] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.556 [2024-07-25 12:02:31.762571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762630] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762649] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:54.556 [2024-07-25 12:02:31.762655] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:54.556 [2024-07-25 12:02:31.762659] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.556 [2024-07-25 12:02:31.762667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:54.556 [2024-07-25 12:02:31.762689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:54.556 [2024-07-25 12:02:31.762699] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:54.556 [2024-07-25 12:02:31.762708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:54.557 [2024-07-25 12:02:31.762717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:54.557 [2024-07-25 12:02:31.762728] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:54.557 [2024-07-25 12:02:31.762734] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:54.557 [2024-07-25 12:02:31.762743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:54.557 [2024-07-25 12:02:31.762750] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:54.557 [2024-07-25 12:02:31.762756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:54.557 [2024-07-25 12:02:31.762762] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:54.557 [2024-07-25 12:02:31.762784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.762802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:15:54.557 [2024-07-25 12:02:31.762816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.762831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:54.557 [2024-07-25 12:02:31.762845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.762868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:54.557 [2024-07-25 12:02:31.762882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.762897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:54.557 [2024-07-25 12:02:31.762914] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:54.557 [2024-07-25 12:02:31.762920] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:54.557 [2024-07-25 12:02:31.762925] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:54.557 [2024-07-25 12:02:31.762929] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:54.557 [2024-07-25 12:02:31.762934] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:54.557 [2024-07-25 12:02:31.762941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:54.557 [2024-07-25 12:02:31.762951] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:15:54.557 [2024-07-25 12:02:31.762956] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:54.557 [2024-07-25 12:02:31.762961] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.557 [2024-07-25 12:02:31.762969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.762977] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:54.557 [2024-07-25 12:02:31.762983] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:54.557 [2024-07-25 12:02:31.762987] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.557 [2024-07-25 12:02:31.762995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.763004] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:54.557 [2024-07-25 12:02:31.763009] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:54.557 [2024-07-25 12:02:31.763016] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:54.557 [2024-07-25 12:02:31.763023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:54.557 [2024-07-25 12:02:31.763032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:54.557 [2024-07-25 12:02:31.763047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:54.557 [2024-07-25 12:02:31.763063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:54.557 [2024-07-25 12:02:31.763072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:54.557 ===================================================== 00:15:54.557 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:54.557 ===================================================== 00:15:54.557 Controller Capabilities/Features 00:15:54.557 ================================ 00:15:54.557 Vendor ID: 4e58 00:15:54.557 Subsystem Vendor ID: 4e58 00:15:54.557 Serial Number: SPDK1 00:15:54.557 Model Number: SPDK bdev Controller 00:15:54.557 Firmware Version: 24.09 00:15:54.557 Recommended Arb Burst: 6 00:15:54.557 IEEE OUI Identifier: 8d 6b 50 00:15:54.557 Multi-path I/O 00:15:54.557 May have multiple subsystem ports: Yes 00:15:54.557 May have multiple controllers: Yes 00:15:54.557 Associated with SR-IOV VF: No 00:15:54.557 Max Data Transfer Size: 131072 00:15:54.557 Max Number of Namespaces: 32 00:15:54.557 Max Number of I/O Queues: 127 00:15:54.557 NVMe Specification Version (VS): 1.3 00:15:54.557 NVMe Specification Version (Identify): 1.3 00:15:54.557 Maximum Queue Entries: 256 00:15:54.557 Contiguous Queues Required: Yes 00:15:54.557 Arbitration Mechanisms Supported 00:15:54.557 Weighted Round Robin: Not Supported 00:15:54.557 Vendor Specific: Not Supported 00:15:54.557 Reset Timeout: 15000 ms 00:15:54.557 Doorbell Stride: 4 bytes 00:15:54.557 NVM Subsystem Reset: Not Supported 00:15:54.557 Command Sets Supported 00:15:54.557 NVM Command Set: Supported 00:15:54.557 Boot Partition: Not Supported 00:15:54.557 Memory Page Size Minimum: 4096 bytes 00:15:54.557 Memory Page Size Maximum: 4096 bytes 00:15:54.557 Persistent Memory Region: Not 
Supported 00:15:54.557 Optional Asynchronous Events Supported 00:15:54.557 Namespace Attribute Notices: Supported 00:15:54.557 Firmware Activation Notices: Not Supported 00:15:54.557 ANA Change Notices: Not Supported 00:15:54.557 PLE Aggregate Log Change Notices: Not Supported 00:15:54.557 LBA Status Info Alert Notices: Not Supported 00:15:54.557 EGE Aggregate Log Change Notices: Not Supported 00:15:54.557 Normal NVM Subsystem Shutdown event: Not Supported 00:15:54.557 Zone Descriptor Change Notices: Not Supported 00:15:54.557 Discovery Log Change Notices: Not Supported 00:15:54.557 Controller Attributes 00:15:54.557 128-bit Host Identifier: Supported 00:15:54.557 Non-Operational Permissive Mode: Not Supported 00:15:54.557 NVM Sets: Not Supported 00:15:54.557 Read Recovery Levels: Not Supported 00:15:54.557 Endurance Groups: Not Supported 00:15:54.557 Predictable Latency Mode: Not Supported 00:15:54.557 Traffic Based Keep ALive: Not Supported 00:15:54.557 Namespace Granularity: Not Supported 00:15:54.557 SQ Associations: Not Supported 00:15:54.557 UUID List: Not Supported 00:15:54.557 Multi-Domain Subsystem: Not Supported 00:15:54.557 Fixed Capacity Management: Not Supported 00:15:54.557 Variable Capacity Management: Not Supported 00:15:54.557 Delete Endurance Group: Not Supported 00:15:54.557 Delete NVM Set: Not Supported 00:15:54.557 Extended LBA Formats Supported: Not Supported 00:15:54.557 Flexible Data Placement Supported: Not Supported 00:15:54.557 00:15:54.557 Controller Memory Buffer Support 00:15:54.557 ================================ 00:15:54.557 Supported: No 00:15:54.557 00:15:54.557 Persistent Memory Region Support 00:15:54.557 ================================ 00:15:54.557 Supported: No 00:15:54.557 00:15:54.557 Admin Command Set Attributes 00:15:54.557 ============================ 00:15:54.557 Security Send/Receive: Not Supported 00:15:54.557 Format NVM: Not Supported 00:15:54.557 Firmware Activate/Download: Not Supported 00:15:54.557 Namespace 
Management: Not Supported 00:15:54.557 Device Self-Test: Not Supported 00:15:54.557 Directives: Not Supported 00:15:54.557 NVMe-MI: Not Supported 00:15:54.557 Virtualization Management: Not Supported 00:15:54.557 Doorbell Buffer Config: Not Supported 00:15:54.557 Get LBA Status Capability: Not Supported 00:15:54.557 Command & Feature Lockdown Capability: Not Supported 00:15:54.557 Abort Command Limit: 4 00:15:54.557 Async Event Request Limit: 4 00:15:54.557 Number of Firmware Slots: N/A 00:15:54.557 Firmware Slot 1 Read-Only: N/A 00:15:54.557 Firmware Activation Without Reset: N/A 00:15:54.557 Multiple Update Detection Support: N/A 00:15:54.557 Firmware Update Granularity: No Information Provided 00:15:54.557 Per-Namespace SMART Log: No 00:15:54.557 Asymmetric Namespace Access Log Page: Not Supported 00:15:54.557 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:54.557 Command Effects Log Page: Supported 00:15:54.557 Get Log Page Extended Data: Supported 00:15:54.557 Telemetry Log Pages: Not Supported 00:15:54.557 Persistent Event Log Pages: Not Supported 00:15:54.557 Supported Log Pages Log Page: May Support 00:15:54.558 Commands Supported & Effects Log Page: Not Supported 00:15:54.558 Feature Identifiers & Effects Log Page:May Support 00:15:54.558 NVMe-MI Commands & Effects Log Page: May Support 00:15:54.558 Data Area 4 for Telemetry Log: Not Supported 00:15:54.558 Error Log Page Entries Supported: 128 00:15:54.558 Keep Alive: Supported 00:15:54.558 Keep Alive Granularity: 10000 ms 00:15:54.558 00:15:54.558 NVM Command Set Attributes 00:15:54.558 ========================== 00:15:54.558 Submission Queue Entry Size 00:15:54.558 Max: 64 00:15:54.558 Min: 64 00:15:54.558 Completion Queue Entry Size 00:15:54.558 Max: 16 00:15:54.558 Min: 16 00:15:54.558 Number of Namespaces: 32 00:15:54.558 Compare Command: Supported 00:15:54.558 Write Uncorrectable Command: Not Supported 00:15:54.558 Dataset Management Command: Supported 00:15:54.558 Write Zeroes Command: Supported 
00:15:54.558 Set Features Save Field: Not Supported 00:15:54.558 Reservations: Not Supported 00:15:54.558 Timestamp: Not Supported 00:15:54.558 Copy: Supported 00:15:54.558 Volatile Write Cache: Present 00:15:54.558 Atomic Write Unit (Normal): 1 00:15:54.558 Atomic Write Unit (PFail): 1 00:15:54.558 Atomic Compare & Write Unit: 1 00:15:54.558 Fused Compare & Write: Supported 00:15:54.558 Scatter-Gather List 00:15:54.558 SGL Command Set: Supported (Dword aligned) 00:15:54.558 SGL Keyed: Not Supported 00:15:54.558 SGL Bit Bucket Descriptor: Not Supported 00:15:54.558 SGL Metadata Pointer: Not Supported 00:15:54.558 Oversized SGL: Not Supported 00:15:54.558 SGL Metadata Address: Not Supported 00:15:54.558 SGL Offset: Not Supported 00:15:54.558 Transport SGL Data Block: Not Supported 00:15:54.558 Replay Protected Memory Block: Not Supported 00:15:54.558 00:15:54.558 Firmware Slot Information 00:15:54.558 ========================= 00:15:54.558 Active slot: 1 00:15:54.558 Slot 1 Firmware Revision: 24.09 00:15:54.558 00:15:54.558 00:15:54.558 Commands Supported and Effects 00:15:54.558 ============================== 00:15:54.558 Admin Commands 00:15:54.558 -------------- 00:15:54.558 Get Log Page (02h): Supported 00:15:54.558 Identify (06h): Supported 00:15:54.558 Abort (08h): Supported 00:15:54.558 Set Features (09h): Supported 00:15:54.558 Get Features (0Ah): Supported 00:15:54.558 Asynchronous Event Request (0Ch): Supported 00:15:54.558 Keep Alive (18h): Supported 00:15:54.558 I/O Commands 00:15:54.558 ------------ 00:15:54.558 Flush (00h): Supported LBA-Change 00:15:54.558 Write (01h): Supported LBA-Change 00:15:54.558 Read (02h): Supported 00:15:54.558 Compare (05h): Supported 00:15:54.558 Write Zeroes (08h): Supported LBA-Change 00:15:54.558 Dataset Management (09h): Supported LBA-Change 00:15:54.558 Copy (19h): Supported LBA-Change 00:15:54.558 00:15:54.558 Error Log 00:15:54.558 ========= 00:15:54.558 00:15:54.558 Arbitration 00:15:54.558 =========== 00:15:54.558 
Arbitration Burst: 1 00:15:54.558 00:15:54.558 Power Management 00:15:54.558 ================ 00:15:54.558 Number of Power States: 1 00:15:54.558 Current Power State: Power State #0 00:15:54.558 Power State #0: 00:15:54.558 Max Power: 0.00 W 00:15:54.558 Non-Operational State: Operational 00:15:54.558 Entry Latency: Not Reported 00:15:54.558 Exit Latency: Not Reported 00:15:54.558 Relative Read Throughput: 0 00:15:54.558 Relative Read Latency: 0 00:15:54.558 Relative Write Throughput: 0 00:15:54.558 Relative Write Latency: 0 00:15:54.558 Idle Power: Not Reported 00:15:54.558 Active Power: Not Reported 00:15:54.558 Non-Operational Permissive Mode: Not Supported 00:15:54.558 00:15:54.558 Health Information 00:15:54.558 ================== 00:15:54.558 Critical Warnings: 00:15:54.558 Available Spare Space: OK 00:15:54.558 Temperature: OK 00:15:54.558 Device Reliability: OK 00:15:54.558 Read Only: No 00:15:54.558 Volatile Memory Backup: OK 00:15:54.558 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:54.558 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:54.558 Available Spare: 0% 00:15:54.558 Available Spare Threshold: 0% 00:15:54.558 Life Percentage Used: 0% 00:15:54.558 Data Units Read: 0 00:15:54.558 Data Units Written: 0 00:15:54.558 Host Read Commands: 0 00:15:54.558 Host Write Commands: 0 00:15:54.558 Controller Busy Time: 0 minutes 00:15:54.558 Power Cycles: 0 00:15:54.558 Power On Hours: 0 hours 00:15:54.558 Unsafe Shutdowns: 0 00:15:54.558 Unrecoverable Media Errors: 0 00:15:54.558 Lifetime Error Log Entries: 0 00:15:54.558 Warning Temperature Time: 0 minutes 00:15:54.558 Critical Temperature Time: 0 minutes 00:15:54.558 00:15:54.558 Number of Queues 00:15:54.558 ================ 00:15:54.558 Number of I/O Submission Queues: 127 00:15:54.558 Number of I/O Completion Queues: 127 00:15:54.558 00:15:54.558 Active Namespaces 00:15:54.558 ================= 00:15:54.558 Namespace ID:1 00:15:54.558 Error Recovery Timeout: Unlimited 00:15:54.558 Command Set Identifier: NVM (00h) 00:15:54.558 Deallocate: Supported 00:15:54.558 Deallocated/Unwritten Error: Not Supported 00:15:54.558 Deallocated Read Value: Unknown 00:15:54.558 Deallocate in Write Zeroes: Not Supported 00:15:54.558 Deallocated Guard Field: 0xFFFF 00:15:54.558 Flush: Supported 00:15:54.558 Reservation: Supported 00:15:54.558 Namespace Sharing Capabilities: Multiple Controllers 00:15:54.558 Size (in LBAs): 131072 (0GiB) 00:15:54.558 Capacity (in LBAs): 131072 (0GiB) 00:15:54.558 Utilization (in LBAs): 131072 (0GiB) 00:15:54.558 NGUID: 4C86CC0CB3214EBFB50D9CA6E7B0E5EE 00:15:54.558 UUID: 4c86cc0c-b321-4ebf-b50d-9ca6e7b0e5ee 00:15:54.558 Thin Provisioning: Not Supported 00:15:54.558 Per-NS Atomic Units: Yes 00:15:54.558 Atomic Boundary Size (Normal): 0 00:15:54.558 Atomic Boundary Size (PFail): 0 00:15:54.558 Atomic Boundary Offset: 0 00:15:54.558 Maximum Single Source Range Length: 65535 00:15:54.558 Maximum Copy Length: 65535 00:15:54.558 Maximum Source Range Count: 1 00:15:54.558 NGUID/EUI64 Never Reused: No 00:15:54.558 Namespace Write Protected: No 00:15:54.558 Number of LBA Formats: 1 00:15:54.558 Current LBA Format: LBA Format #00 00:15:54.558 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:54.558 00:15:54.558 
[2024-07-25 12:02:31.763193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:54.558 [2024-07-25 12:02:31.763209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:54.558 [2024-07-25 12:02:31.763242] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:54.558 [2024-07-25 12:02:31.763254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.558 [2024-07-25 12:02:31.763263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.558 [2024-07-25 12:02:31.763271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.558 [2024-07-25 12:02:31.763279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.558 [2024-07-25 12:02:31.763977] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:54.558 [2024-07-25 12:02:31.763992] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:54.558 [2024-07-25 12:02:31.764985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.558 [2024-07-25 12:02:31.767619] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:54.558 [2024-07-25 12:02:31.767629] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:54.558 [2024-07-25 12:02:31.768010] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:54.558 [2024-07-25 12:02:31.768024] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:54.558 [2024-07-25 12:02:31.768082] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:54.558 [2024-07-25 12:02:31.770054] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:54.558 
12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:54.558 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.817 [2024-07-25 12:02:32.012830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:00.101 Initializing NVMe Controllers 00:16:00.101 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:00.101 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:00.101 Initialization complete. Launching workers. 00:16:00.101 ======================================================== 00:16:00.101 Latency(us) 00:16:00.101 Device Information : IOPS MiB/s Average min max 00:16:00.101 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 18637.47 72.80 6868.89 2704.60 13622.22 00:16:00.101 ======================================================== 00:16:00.101 Total : 18637.47 72.80 6868.89 2704.60 13622.22 00:16:00.101 00:16:00.101 [2024-07-25 12:02:37.037872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:00.101 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:00.101 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.101 [2024-07-25 12:02:37.317670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:05.379 Initializing NVMe Controllers 00:16:05.379 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:05.379 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:05.379 Initialization complete. Launching workers. 00:16:05.379 ======================================================== 00:16:05.379 Latency(us) 00:16:05.379 Device Information : IOPS MiB/s Average min max 00:16:05.379 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15262.13 59.62 8386.45 7318.18 15960.81 00:16:05.379 ======================================================== 00:16:05.379 Total : 15262.13 59.62 8386.45 7318.18 15960.81 00:16:05.379 00:16:05.379 [2024-07-25 12:02:42.363573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.379 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:05.379 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.379 [2024-07-25 12:02:42.656396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:10.688 [2024-07-25 12:02:47.739179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:10.688 Initializing NVMe Controllers 00:16:10.688 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:10.688 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:10.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:10.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:10.688 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:10.688 Initialization complete. Launching workers. 00:16:10.688 Starting thread on core 2 00:16:10.688 Starting thread on core 3 00:16:10.688 Starting thread on core 1 00:16:10.688 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:10.688 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.947 [2024-07-25 12:02:48.085436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:14.232 [2024-07-25 12:02:51.157383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:14.232 Initializing NVMe Controllers 00:16:14.232 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.232 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.232 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:14.232 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:14.232 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:14.232 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:14.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:14.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:14.232 Initialization complete. Launching workers. 
00:16:14.232 Starting thread on core 1 with urgent priority queue 00:16:14.232 Starting thread on core 2 with urgent priority queue 00:16:14.232 Starting thread on core 3 with urgent priority queue 00:16:14.232 Starting thread on core 0 with urgent priority queue 00:16:14.232 SPDK bdev Controller (SPDK1 ) core 0: 6553.00 IO/s 15.26 secs/100000 ios 00:16:14.232 SPDK bdev Controller (SPDK1 ) core 1: 4235.33 IO/s 23.61 secs/100000 ios 00:16:14.232 SPDK bdev Controller (SPDK1 ) core 2: 7234.33 IO/s 13.82 secs/100000 ios 00:16:14.232 SPDK bdev Controller (SPDK1 ) core 3: 4903.00 IO/s 20.40 secs/100000 ios 00:16:14.232 ======================================================== 00:16:14.232 00:16:14.232 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:14.232 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.232 [2024-07-25 12:02:51.485434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:14.232 Initializing NVMe Controllers 00:16:14.232 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.232 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.232 Namespace ID: 1 size: 0GB 00:16:14.232 Initialization complete. 00:16:14.232 INFO: using host memory buffer for IO 00:16:14.232 Hello world! 
00:16:14.232 [2024-07-25 12:02:51.518879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:14.491 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:14.491 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.749 [2024-07-25 12:02:51.821208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:15.684 Initializing NVMe Controllers 00:16:15.684 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:15.684 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:15.684 Initialization complete. Launching workers. 00:16:15.684 submit (in ns) avg, min, max = 8295.1, 4560.0, 4001733.6 00:16:15.684 complete (in ns) avg, min, max = 54610.3, 2758.2, 4000929.1 00:16:15.684 00:16:15.684 Submit histogram 00:16:15.684 ================ 00:16:15.684 Range in us Cumulative Count 00:16:15.684 4.538 - 4.567: 0.0437% ( 3) 00:16:15.684 4.567 - 4.596: 1.1799% ( 78) 00:16:15.684 4.596 - 4.625: 3.8456% ( 183) 00:16:15.684 4.625 - 4.655: 7.1377% ( 226) 00:16:15.684 4.655 - 4.684: 13.0371% ( 405) 00:16:15.684 4.684 - 4.713: 28.8420% ( 1085) 00:16:15.684 4.713 - 4.742: 41.1071% ( 842) 00:16:15.684 4.742 - 4.771: 51.9301% ( 743) 00:16:15.684 4.771 - 4.800: 63.5106% ( 795) 00:16:15.684 4.800 - 4.829: 72.3816% ( 609) 00:16:15.684 4.829 - 4.858: 80.9614% ( 589) 00:16:15.684 4.858 - 4.887: 84.4283% ( 238) 00:16:15.684 4.887 - 4.916: 86.0743% ( 113) 00:16:15.684 4.916 - 4.945: 87.4727% ( 96) 00:16:15.684 4.945 - 4.975: 89.3955% ( 132) 00:16:15.684 4.975 - 5.004: 91.4057% ( 138) 00:16:15.684 5.004 - 5.033: 93.4596% ( 141) 00:16:15.684 5.033 - 5.062: 95.3387% ( 129) 00:16:15.684 5.062 - 5.091: 96.7953% ( 100) 00:16:15.684 5.091 - 5.120: 
97.8587% ( 73) 00:16:15.684 5.120 - 5.149: 98.5288% ( 46) 00:16:15.684 5.149 - 5.178: 99.0532% ( 36) 00:16:15.684 5.178 - 5.207: 99.2717% ( 15) 00:16:15.684 5.207 - 5.236: 99.3445% ( 5) 00:16:15.684 5.236 - 5.265: 99.4465% ( 7) 00:16:15.684 5.265 - 5.295: 99.5193% ( 5) 00:16:15.684 7.505 - 7.564: 99.5339% ( 1) 00:16:15.684 7.564 - 7.622: 99.5630% ( 2) 00:16:15.684 7.622 - 7.680: 99.5921% ( 2) 00:16:15.684 7.680 - 7.738: 99.6067% ( 1) 00:16:15.684 7.796 - 7.855: 99.6213% ( 1) 00:16:15.684 7.913 - 7.971: 99.6358% ( 1) 00:16:15.684 7.971 - 8.029: 99.6504% ( 1) 00:16:15.684 8.145 - 8.204: 99.6650% ( 1) 00:16:15.684 8.320 - 8.378: 99.6795% ( 1) 00:16:15.684 8.378 - 8.436: 99.7087% ( 2) 00:16:15.684 8.436 - 8.495: 99.7232% ( 1) 00:16:15.684 8.495 - 8.553: 99.7378% ( 1) 00:16:15.684 8.553 - 8.611: 99.7669% ( 2) 00:16:15.684 8.785 - 8.844: 99.7815% ( 1) 00:16:15.684 9.018 - 9.076: 99.7961% ( 1) 00:16:15.684 9.076 - 9.135: 99.8106% ( 1) 00:16:15.684 9.193 - 9.251: 99.8252% ( 1) 00:16:15.684 9.309 - 9.367: 99.8398% ( 1) 00:16:15.684 9.949 - 10.007: 99.8543% ( 1) 00:16:15.684 10.240 - 10.298: 99.8689% ( 1) 00:16:15.684 10.473 - 10.531: 99.8835% ( 1) 00:16:15.684 10.705 - 10.764: 99.8980% ( 1) 00:16:15.684 10.764 - 10.822: 99.9126% ( 1) 00:16:15.684 3991.738 - 4021.527: 100.0000% ( 6) 00:16:15.684 00:16:15.684 Complete histogram 00:16:15.684 ================== 00:16:15.684 Range in us Cumulative Count 00:16:15.684 2.749 - 2.764: 0.1311% ( 9) 00:16:15.684 2.764 - 2.778: 5.6664% ( 380) 00:16:15.684 2.778 - 2.793: 33.1245% ( 1885) 00:16:15.684 2.793 - 2.807: 68.9585% ( 2460) 00:16:15.684 2.807 - 2.822: 83.9476% ( 1029) 00:16:15.684 2.822 - 2.836: 89.0896% ( 353) 00:16:15.684 2.836 - 2.851: 93.3139% ( 290) 00:16:15.684 2.851 - 2.865: 96.2272% ( 200) 00:16:15.684 2.865 - 2.880: 97.4654% ( 85) 00:16:15.684 2.880 - 2.895: 97.8150% ( 24) 00:16:15.684 2.895 - 2.909: 98.0626% ( 17) 00:16:15.684 2.909 - 2.924: 98.2666% ( 14) 00:16:15.684 2.924 - 2.938: 98.3394% ( 5) 00:16:15.684 2.938 - 
2.953: 98.3977% ( 4) 00:16:15.684 2.953 - 2.967: 98.4414% ( 3) 00:16:15.684 2.967 - 2.982: 98.4559% ( 1) 00:16:15.684 2.982 - 2.996: 98.4705% ( 1) 00:16:15.684 2.996 - 3.011: 98.4851% ( 1) 00:16:15.684 3.040 - 3.055: 98.4996% ( 1) 00:16:15.684 3.055 - 3.069: 98.5142% ( 1) 00:16:15.684 3.084 - 3.098: 98.5288% ( 1) 00:16:15.684 3.113 - 3.127: 98.5433% ( 1) 00:16:15.684 5.207 - 5.236: 98.5579% ( 1) 00:16:15.684 5.498 - 5.527: 98.5725% ( 1) 00:16:15.684 5.964 - 5.993: 98.5870% ( 1) 00:16:15.684 6.284 - 6.313: 98.6016% ( 1) 00:16:15.684 6.429 - 6.458: 98.6162% ( 1) 00:16:15.684 6.458 - [2024-07-25 12:02:52.846515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:15.684 6.487: 98.6307% ( 1) 00:16:15.684 6.807 - 6.836: 98.6453% ( 1) 00:16:15.684 7.127 - 7.156: 98.6599% ( 1) 00:16:15.684 7.331 - 7.360: 98.6744% ( 1) 00:16:15.684 8.029 - 8.087: 98.6890% ( 1) 00:16:15.684 8.262 - 8.320: 98.7036% ( 1) 00:16:15.684 3991.738 - 4021.527: 100.0000% ( 89) 00:16:15.684 00:16:15.684 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:15.684 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:15.684 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:15.684 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:15.684 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:15.943 [ 00:16:15.943 { 00:16:15.943 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:15.943 "subtype": "Discovery", 00:16:15.943 "listen_addresses": [], 00:16:15.943 
"allow_any_host": true, 00:16:15.943 "hosts": [] 00:16:15.943 }, 00:16:15.943 { 00:16:15.943 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:15.943 "subtype": "NVMe", 00:16:15.943 "listen_addresses": [ 00:16:15.943 { 00:16:15.943 "trtype": "VFIOUSER", 00:16:15.943 "adrfam": "IPv4", 00:16:15.943 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:15.943 "trsvcid": "0" 00:16:15.943 } 00:16:15.943 ], 00:16:15.943 "allow_any_host": true, 00:16:15.943 "hosts": [], 00:16:15.943 "serial_number": "SPDK1", 00:16:15.943 "model_number": "SPDK bdev Controller", 00:16:15.943 "max_namespaces": 32, 00:16:15.943 "min_cntlid": 1, 00:16:15.943 "max_cntlid": 65519, 00:16:15.943 "namespaces": [ 00:16:15.943 { 00:16:15.943 "nsid": 1, 00:16:15.943 "bdev_name": "Malloc1", 00:16:15.943 "name": "Malloc1", 00:16:15.943 "nguid": "4C86CC0CB3214EBFB50D9CA6E7B0E5EE", 00:16:15.943 "uuid": "4c86cc0c-b321-4ebf-b50d-9ca6e7b0e5ee" 00:16:15.943 } 00:16:15.943 ] 00:16:15.943 }, 00:16:15.943 { 00:16:15.943 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:15.943 "subtype": "NVMe", 00:16:15.943 "listen_addresses": [ 00:16:15.943 { 00:16:15.943 "trtype": "VFIOUSER", 00:16:15.943 "adrfam": "IPv4", 00:16:15.943 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:15.943 "trsvcid": "0" 00:16:15.943 } 00:16:15.943 ], 00:16:15.943 "allow_any_host": true, 00:16:15.943 "hosts": [], 00:16:15.943 "serial_number": "SPDK2", 00:16:15.943 "model_number": "SPDK bdev Controller", 00:16:15.943 "max_namespaces": 32, 00:16:15.943 "min_cntlid": 1, 00:16:15.943 "max_cntlid": 65519, 00:16:15.943 "namespaces": [ 00:16:15.943 { 00:16:15.943 "nsid": 1, 00:16:15.943 "bdev_name": "Malloc2", 00:16:15.943 "name": "Malloc2", 00:16:15.943 "nguid": "71597F867B1F4608A3345E4A0A244A45", 00:16:15.943 "uuid": "71597f86-7b1f-4608-a334-5e4a0a244a45" 00:16:15.943 } 00:16:15.943 ] 00:16:15.943 } 00:16:15.943 ] 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 
00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4108085 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:15.943 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:15.943 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.202 [2024-07-25 12:02:53.334598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:16.202 Malloc3 00:16:16.202 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:16.460 [2024-07-25 12:02:53.610769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:16.460 
12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:16.460 Asynchronous Event Request test 00:16:16.460 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:16.460 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:16.460 Registering asynchronous event callbacks... 00:16:16.460 Starting namespace attribute notice tests for all controllers... 00:16:16.460 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:16.460 aer_cb - Changed Namespace 00:16:16.460 Cleaning up... 00:16:16.720 [ 00:16:16.720 { 00:16:16.720 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:16.720 "subtype": "Discovery", 00:16:16.720 "listen_addresses": [], 00:16:16.720 "allow_any_host": true, 00:16:16.720 "hosts": [] 00:16:16.720 }, 00:16:16.720 { 00:16:16.720 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:16.720 "subtype": "NVMe", 00:16:16.720 "listen_addresses": [ 00:16:16.720 { 00:16:16.720 "trtype": "VFIOUSER", 00:16:16.720 "adrfam": "IPv4", 00:16:16.720 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:16.720 "trsvcid": "0" 00:16:16.720 } 00:16:16.720 ], 00:16:16.720 "allow_any_host": true, 00:16:16.720 "hosts": [], 00:16:16.720 "serial_number": "SPDK1", 00:16:16.720 "model_number": "SPDK bdev Controller", 00:16:16.720 "max_namespaces": 32, 00:16:16.720 "min_cntlid": 1, 00:16:16.720 "max_cntlid": 65519, 00:16:16.720 "namespaces": [ 00:16:16.720 { 00:16:16.720 "nsid": 1, 00:16:16.720 "bdev_name": "Malloc1", 00:16:16.720 "name": "Malloc1", 00:16:16.720 "nguid": "4C86CC0CB3214EBFB50D9CA6E7B0E5EE", 00:16:16.720 "uuid": "4c86cc0c-b321-4ebf-b50d-9ca6e7b0e5ee" 00:16:16.720 }, 00:16:16.720 { 00:16:16.720 "nsid": 2, 00:16:16.720 "bdev_name": "Malloc3", 00:16:16.720 "name": "Malloc3", 00:16:16.720 "nguid": "16FAE6E22BF14982B9B1E89576A7B554", 00:16:16.720 "uuid": "16fae6e2-2bf1-4982-b9b1-e89576a7b554" 
00:16:16.720 } 00:16:16.720 ] 00:16:16.720 }, 00:16:16.720 { 00:16:16.720 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:16.720 "subtype": "NVMe", 00:16:16.720 "listen_addresses": [ 00:16:16.720 { 00:16:16.720 "trtype": "VFIOUSER", 00:16:16.720 "adrfam": "IPv4", 00:16:16.720 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:16.720 "trsvcid": "0" 00:16:16.720 } 00:16:16.720 ], 00:16:16.720 "allow_any_host": true, 00:16:16.720 "hosts": [], 00:16:16.720 "serial_number": "SPDK2", 00:16:16.720 "model_number": "SPDK bdev Controller", 00:16:16.720 "max_namespaces": 32, 00:16:16.720 "min_cntlid": 1, 00:16:16.720 "max_cntlid": 65519, 00:16:16.720 "namespaces": [ 00:16:16.720 { 00:16:16.720 "nsid": 1, 00:16:16.720 "bdev_name": "Malloc2", 00:16:16.720 "name": "Malloc2", 00:16:16.720 "nguid": "71597F867B1F4608A3345E4A0A244A45", 00:16:16.720 "uuid": "71597f86-7b1f-4608-a334-5e4a0a244a45" 00:16:16.720 } 00:16:16.720 ] 00:16:16.720 } 00:16:16.720 ] 00:16:16.720 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4108085 00:16:16.720 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:16.720 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:16.720 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:16.720 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:16.720 [2024-07-25 12:02:53.840641] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:16:16.720 [2024-07-25 12:02:53.840667] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108095 ] 00:16:16.720 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.720 [2024-07-25 12:02:53.875954] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:16.720 [2024-07-25 12:02:53.883896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:16.720 [2024-07-25 12:02:53.883922] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f17212db000 00:16:16.720 [2024-07-25 12:02:53.884892] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.885902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.886910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.887927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.888946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.889951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.890964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.891963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:16.720 [2024-07-25 12:02:53.892979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:16.720 [2024-07-25 12:02:53.892993] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f17212d0000 00:16:16.720 [2024-07-25 12:02:53.894403] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:16.720 [2024-07-25 12:02:53.912165] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:16.720 [2024-07-25 12:02:53.912198] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:16.720 [2024-07-25 12:02:53.917307] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:16.720 [2024-07-25 12:02:53.917360] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:16.720 [2024-07-25 12:02:53.917461] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:16.720 [2024-07-25 12:02:53.917480] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:16.720 [2024-07-25 12:02:53.917487] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:16.720 [2024-07-25 12:02:53.918315] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:16.720 [2024-07-25 12:02:53.918334] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:16.720 [2024-07-25 12:02:53.918344] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:16.720 [2024-07-25 12:02:53.919317] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:16.720 [2024-07-25 12:02:53.919330] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:16.720 [2024-07-25 12:02:53.919340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:16.720 [2024-07-25 12:02:53.920329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:16.720 [2024-07-25 12:02:53.920342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:16.720 [2024-07-25 12:02:53.921345] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:16.721 [2024-07-25 12:02:53.921358] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:16.721 [2024-07-25 12:02:53.921365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:16.721 [2024-07-25 12:02:53.921374] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:16.721 [2024-07-25 12:02:53.921481] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:16.721 [2024-07-25 12:02:53.921487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:16.721 [2024-07-25 12:02:53.921494] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:16.721 [2024-07-25 12:02:53.922349] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:16.721 [2024-07-25 12:02:53.923355] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:16.721 [2024-07-25 12:02:53.924369] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:16.721 [2024-07-25 12:02:53.925377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:16.721 [2024-07-25 12:02:53.925429] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:16.721 [2024-07-25 12:02:53.926389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:16.721 [2024-07-25 12:02:53.926401] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:16.721 [2024-07-25 12:02:53.926408] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.926434] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:16.721 [2024-07-25 12:02:53.926444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.926458] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:16.721 [2024-07-25 12:02:53.926467] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:16.721 [2024-07-25 12:02:53.926472] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.721 [2024-07-25 12:02:53.926487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.932632] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:16.721 [2024-07-25 12:02:53.932639] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:16.721 [2024-07-25 12:02:53.932645] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:16.721 [2024-07-25 12:02:53.932651] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:16.721 [2024-07-25 12:02:53.932657] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:16.721 [2024-07-25 12:02:53.932663] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:16.721 [2024-07-25 12:02:53.932671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.932680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.932697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.941610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.941631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.721 [2024-07-25 12:02:53.941642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.721 [2024-07-25 12:02:53.941654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.721 [2024-07-25 12:02:53.941664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.721 [2024-07-25 12:02:53.941670] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.941681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.941693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.949609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.949620] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:16.721 [2024-07-25 12:02:53.949627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.949638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.949649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.949662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.957610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.957691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.957704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.957714] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:16.721 [2024-07-25 12:02:53.957721] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:16.721 [2024-07-25 12:02:53.957725] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.721 [2024-07-25 12:02:53.957734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.965616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.965632] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:16.721 [2024-07-25 12:02:53.965647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.965657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.965667] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:16.721 [2024-07-25 12:02:53.965673] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:16.721 [2024-07-25 12:02:53.965677] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.721 [2024-07-25 12:02:53.965685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.973612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.973634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.973645] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.973655] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:16.721 [2024-07-25 12:02:53.973661] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:16.721 [2024-07-25 12:02:53.973666] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.721 [2024-07-25 12:02:53.973674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:16.721 [2024-07-25 12:02:53.981613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:16.721 [2024-07-25 12:02:53.981628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.981637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.981650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.981659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:16.721 
[2024-07-25 12:02:53.981666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.981673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.981679] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:16.721 [2024-07-25 12:02:53.981685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:16.721 [2024-07-25 12:02:53.981692] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:16.721 [2024-07-25 12:02:53.981711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:16.722 [2024-07-25 12:02:53.989612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:16.722 [2024-07-25 12:02:53.989632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:16.722 [2024-07-25 12:02:53.997613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:16.722 [2024-07-25 12:02:53.997630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:16.722 [2024-07-25 12:02:54.005611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:16.722 [2024-07-25 12:02:54.005629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:16.722 [2024-07-25 12:02:54.013612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:16.722 [2024-07-25 12:02:54.013635] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:16.722 [2024-07-25 12:02:54.013642] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:16.722 [2024-07-25 12:02:54.013646] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:16.722 [2024-07-25 12:02:54.013650] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:16.722 [2024-07-25 12:02:54.013655] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:16.722 [2024-07-25 12:02:54.013663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:16.722 [2024-07-25 12:02:54.013672] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:16.722 [2024-07-25 12:02:54.013678] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:16.722 [2024-07-25 12:02:54.013682] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.722 [2024-07-25 12:02:54.013690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:16.722 [2024-07-25 12:02:54.013699] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:16.722 [2024-07-25 12:02:54.013705] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:16:16.722 [2024-07-25 12:02:54.013712] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.722 [2024-07-25 12:02:54.013719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:16.722 [2024-07-25 12:02:54.013729] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:16.722 [2024-07-25 12:02:54.013735] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:16.722 [2024-07-25 12:02:54.013739] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:16.722 [2024-07-25 12:02:54.013747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:16.980 [2024-07-25 12:02:54.021613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:16.980 [2024-07-25 12:02:54.021634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:16.980 [2024-07-25 12:02:54.021650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:16.980 [2024-07-25 12:02:54.021659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:16.980 ===================================================== 00:16:16.980 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:16.980 ===================================================== 00:16:16.980 Controller Capabilities/Features 00:16:16.980 ================================ 00:16:16.980 Vendor ID: 4e58 00:16:16.980 
Subsystem Vendor ID: 4e58 00:16:16.980 Serial Number: SPDK2 00:16:16.980 Model Number: SPDK bdev Controller 00:16:16.980 Firmware Version: 24.09 00:16:16.980 Recommended Arb Burst: 6 00:16:16.980 IEEE OUI Identifier: 8d 6b 50 00:16:16.980 Multi-path I/O 00:16:16.980 May have multiple subsystem ports: Yes 00:16:16.980 May have multiple controllers: Yes 00:16:16.980 Associated with SR-IOV VF: No 00:16:16.980 Max Data Transfer Size: 131072 00:16:16.980 Max Number of Namespaces: 32 00:16:16.980 Max Number of I/O Queues: 127 00:16:16.980 NVMe Specification Version (VS): 1.3 00:16:16.980 NVMe Specification Version (Identify): 1.3 00:16:16.980 Maximum Queue Entries: 256 00:16:16.980 Contiguous Queues Required: Yes 00:16:16.980 Arbitration Mechanisms Supported 00:16:16.980 Weighted Round Robin: Not Supported 00:16:16.980 Vendor Specific: Not Supported 00:16:16.980 Reset Timeout: 15000 ms 00:16:16.980 Doorbell Stride: 4 bytes 00:16:16.980 NVM Subsystem Reset: Not Supported 00:16:16.980 Command Sets Supported 00:16:16.980 NVM Command Set: Supported 00:16:16.980 Boot Partition: Not Supported 00:16:16.980 Memory Page Size Minimum: 4096 bytes 00:16:16.980 Memory Page Size Maximum: 4096 bytes 00:16:16.980 Persistent Memory Region: Not Supported 00:16:16.980 Optional Asynchronous Events Supported 00:16:16.980 Namespace Attribute Notices: Supported 00:16:16.980 Firmware Activation Notices: Not Supported 00:16:16.980 ANA Change Notices: Not Supported 00:16:16.980 PLE Aggregate Log Change Notices: Not Supported 00:16:16.980 LBA Status Info Alert Notices: Not Supported 00:16:16.980 EGE Aggregate Log Change Notices: Not Supported 00:16:16.980 Normal NVM Subsystem Shutdown event: Not Supported 00:16:16.980 Zone Descriptor Change Notices: Not Supported 00:16:16.980 Discovery Log Change Notices: Not Supported 00:16:16.980 Controller Attributes 00:16:16.980 128-bit Host Identifier: Supported 00:16:16.980 Non-Operational Permissive Mode: Not Supported 00:16:16.980 NVM Sets: Not Supported 
00:16:16.980 Read Recovery Levels: Not Supported 00:16:16.980 Endurance Groups: Not Supported 00:16:16.980 Predictable Latency Mode: Not Supported 00:16:16.980 Traffic Based Keep ALive: Not Supported 00:16:16.980 Namespace Granularity: Not Supported 00:16:16.980 SQ Associations: Not Supported 00:16:16.980 UUID List: Not Supported 00:16:16.980 Multi-Domain Subsystem: Not Supported 00:16:16.980 Fixed Capacity Management: Not Supported 00:16:16.980 Variable Capacity Management: Not Supported 00:16:16.980 Delete Endurance Group: Not Supported 00:16:16.980 Delete NVM Set: Not Supported 00:16:16.980 Extended LBA Formats Supported: Not Supported 00:16:16.980 Flexible Data Placement Supported: Not Supported 00:16:16.980 00:16:16.980 Controller Memory Buffer Support 00:16:16.980 ================================ 00:16:16.980 Supported: No 00:16:16.980 00:16:16.980 Persistent Memory Region Support 00:16:16.980 ================================ 00:16:16.980 Supported: No 00:16:16.981 00:16:16.981 Admin Command Set Attributes 00:16:16.981 ============================ 00:16:16.981 Security Send/Receive: Not Supported 00:16:16.981 Format NVM: Not Supported 00:16:16.981 Firmware Activate/Download: Not Supported 00:16:16.981 Namespace Management: Not Supported 00:16:16.981 Device Self-Test: Not Supported 00:16:16.981 Directives: Not Supported 00:16:16.981 NVMe-MI: Not Supported 00:16:16.981 Virtualization Management: Not Supported 00:16:16.981 Doorbell Buffer Config: Not Supported 00:16:16.981 Get LBA Status Capability: Not Supported 00:16:16.981 Command & Feature Lockdown Capability: Not Supported 00:16:16.981 Abort Command Limit: 4 00:16:16.981 Async Event Request Limit: 4 00:16:16.981 Number of Firmware Slots: N/A 00:16:16.981 Firmware Slot 1 Read-Only: N/A 00:16:16.981 Firmware Activation Without Reset: N/A 00:16:16.981 Multiple Update Detection Support: N/A 00:16:16.981 Firmware Update Granularity: No Information Provided 00:16:16.981 Per-Namespace SMART Log: No 00:16:16.981 
Asymmetric Namespace Access Log Page: Not Supported 00:16:16.981 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:16.981 Command Effects Log Page: Supported 00:16:16.981 Get Log Page Extended Data: Supported 00:16:16.981 Telemetry Log Pages: Not Supported 00:16:16.981 Persistent Event Log Pages: Not Supported 00:16:16.981 Supported Log Pages Log Page: May Support 00:16:16.981 Commands Supported & Effects Log Page: Not Supported 00:16:16.981 Feature Identifiers & Effects Log Page:May Support 00:16:16.981 NVMe-MI Commands & Effects Log Page: May Support 00:16:16.981 Data Area 4 for Telemetry Log: Not Supported 00:16:16.981 Error Log Page Entries Supported: 128 00:16:16.981 Keep Alive: Supported 00:16:16.981 Keep Alive Granularity: 10000 ms 00:16:16.981 00:16:16.981 NVM Command Set Attributes 00:16:16.981 ========================== 00:16:16.981 Submission Queue Entry Size 00:16:16.981 Max: 64 00:16:16.981 Min: 64 00:16:16.981 Completion Queue Entry Size 00:16:16.981 Max: 16 00:16:16.981 Min: 16 00:16:16.981 Number of Namespaces: 32 00:16:16.981 Compare Command: Supported 00:16:16.981 Write Uncorrectable Command: Not Supported 00:16:16.981 Dataset Management Command: Supported 00:16:16.981 Write Zeroes Command: Supported 00:16:16.981 Set Features Save Field: Not Supported 00:16:16.981 Reservations: Not Supported 00:16:16.981 Timestamp: Not Supported 00:16:16.981 Copy: Supported 00:16:16.981 Volatile Write Cache: Present 00:16:16.981 Atomic Write Unit (Normal): 1 00:16:16.981 Atomic Write Unit (PFail): 1 00:16:16.981 Atomic Compare & Write Unit: 1 00:16:16.981 Fused Compare & Write: Supported 00:16:16.981 Scatter-Gather List 00:16:16.981 SGL Command Set: Supported (Dword aligned) 00:16:16.981 SGL Keyed: Not Supported 00:16:16.981 SGL Bit Bucket Descriptor: Not Supported 00:16:16.981 SGL Metadata Pointer: Not Supported 00:16:16.981 Oversized SGL: Not Supported 00:16:16.981 SGL Metadata Address: Not Supported 00:16:16.981 SGL Offset: Not Supported 00:16:16.981 Transport 
SGL Data Block: Not Supported 00:16:16.981 Replay Protected Memory Block: Not Supported 00:16:16.981 00:16:16.981 Firmware Slot Information 00:16:16.981 ========================= 00:16:16.981 Active slot: 1 00:16:16.981 Slot 1 Firmware Revision: 24.09 00:16:16.981 00:16:16.981 00:16:16.981 Commands Supported and Effects 00:16:16.981 ============================== 00:16:16.981 Admin Commands 00:16:16.981 -------------- 00:16:16.981 Get Log Page (02h): Supported 00:16:16.981 Identify (06h): Supported 00:16:16.981 Abort (08h): Supported 00:16:16.981 Set Features (09h): Supported 00:16:16.981 Get Features (0Ah): Supported 00:16:16.981 Asynchronous Event Request (0Ch): Supported 00:16:16.981 Keep Alive (18h): Supported 00:16:16.981 I/O Commands 00:16:16.981 ------------ 00:16:16.981 Flush (00h): Supported LBA-Change 00:16:16.981 Write (01h): Supported LBA-Change 00:16:16.981 Read (02h): Supported 00:16:16.981 Compare (05h): Supported 00:16:16.981 Write Zeroes (08h): Supported LBA-Change 00:16:16.981 Dataset Management (09h): Supported LBA-Change 00:16:16.981 Copy (19h): Supported LBA-Change 00:16:16.981 00:16:16.981 Error Log 00:16:16.981 ========= 00:16:16.981 00:16:16.981 Arbitration 00:16:16.981 =========== 00:16:16.981 Arbitration Burst: 1 00:16:16.981 00:16:16.981 Power Management 00:16:16.981 ================ 00:16:16.981 Number of Power States: 1 00:16:16.981 Current Power State: Power State #0 00:16:16.981 Power State #0: 00:16:16.981 Max Power: 0.00 W 00:16:16.981 Non-Operational State: Operational 00:16:16.981 Entry Latency: Not Reported 00:16:16.981 Exit Latency: Not Reported 00:16:16.981 Relative Read Throughput: 0 00:16:16.981 Relative Read Latency: 0 00:16:16.981 Relative Write Throughput: 0 00:16:16.981 Relative Write Latency: 0 00:16:16.981 Idle Power: Not Reported 00:16:16.981 Active Power: Not Reported 00:16:16.981 Non-Operational Permissive Mode: Not Supported 00:16:16.981 00:16:16.981 Health Information 00:16:16.981 ================== 00:16:16.981 
Critical Warnings: 00:16:16.981 Available Spare Space: OK 00:16:16.981 Temperature: OK 00:16:16.981 Device Reliability: OK 00:16:16.981 Read Only: No 00:16:16.981 Volatile Memory Backup: OK 00:16:16.981 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:16.981 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:16.981 Available Spare: 0% 00:16:16.981 Available Spare Threshold: 0% 00:16:16.981 Life Percentage Used: 0% 00:16:16.981 Data Units Read: 0 00:16:16.981 Data Units Written: 0 00:16:16.981 Host Read Commands: 0 00:16:16.981 Host Write Commands: 0 00:16:16.981 Controller Busy Time: 0 minutes 00:16:16.981 Power Cycles: 0 00:16:16.981 Power On Hours: 0 hours 00:16:16.981 Unsafe Shutdowns: 0 00:16:16.981 Unrecoverable Media Errors: 0 00:16:16.981 Lifetime Error Log Entries: 0 00:16:16.981 Warning Temperature Time: 0 minutes 00:16:16.981 Critical Temperature Time: 0 minutes 00:16:16.981 00:16:16.981 
[2024-07-25 12:02:54.021785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:16.981 [2024-07-25 12:02:54.029612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:16.981 [2024-07-25 12:02:54.029652] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:16.981 [2024-07-25 12:02:54.029664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.981 [2024-07-25 12:02:54.029674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.981 [2024-07-25 12:02:54.029683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.981 [2024-07-25 12:02:54.029691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.981 [2024-07-25 12:02:54.029766] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:16.981 [2024-07-25 12:02:54.029780] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:16.981 [2024-07-25 12:02:54.030774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:16.981 [2024-07-25 12:02:54.030836] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:16.981 [2024-07-25 12:02:54.030845] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:16.981 [2024-07-25 12:02:54.031790] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:16.981 [2024-07-25 12:02:54.031806] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:16.981 [2024-07-25 12:02:54.031862] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:16.981 [2024-07-25 12:02:54.033334] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:16.981 
Number of Queues 00:16:16.981 ================ 00:16:16.981 Number of I/O Submission Queues: 127 00:16:16.981 Number of I/O Completion Queues: 127 00:16:16.981 00:16:16.981 Active Namespaces 00:16:16.981 ================= 00:16:16.981 Namespace ID:1 00:16:16.981 Error Recovery Timeout: Unlimited 00:16:16.981 Command Set Identifier: NVM (00h) 00:16:16.981 Deallocate: 
Supported 00:16:16.981 Deallocated/Unwritten Error: Not Supported 00:16:16.981 Deallocated Read Value: Unknown 00:16:16.981 Deallocate in Write Zeroes: Not Supported 00:16:16.981 Deallocated Guard Field: 0xFFFF 00:16:16.981 Flush: Supported 00:16:16.982 Reservation: Supported 00:16:16.982 Namespace Sharing Capabilities: Multiple Controllers 00:16:16.982 Size (in LBAs): 131072 (0GiB) 00:16:16.982 Capacity (in LBAs): 131072 (0GiB) 00:16:16.982 Utilization (in LBAs): 131072 (0GiB) 00:16:16.982 NGUID: 71597F867B1F4608A3345E4A0A244A45 00:16:16.982 UUID: 71597f86-7b1f-4608-a334-5e4a0a244a45 00:16:16.982 Thin Provisioning: Not Supported 00:16:16.982 Per-NS Atomic Units: Yes 00:16:16.982 Atomic Boundary Size (Normal): 0 00:16:16.982 Atomic Boundary Size (PFail): 0 00:16:16.982 Atomic Boundary Offset: 0 00:16:16.982 Maximum Single Source Range Length: 65535 00:16:16.982 Maximum Copy Length: 65535 00:16:16.982 Maximum Source Range Count: 1 00:16:16.982 NGUID/EUI64 Never Reused: No 00:16:16.982 Namespace Write Protected: No 00:16:16.982 Number of LBA Formats: 1 00:16:16.982 Current LBA Format: LBA Format #00 00:16:16.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:16.982 00:16:16.982 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:16.982 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.240 [2024-07-25 12:02:54.294317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:22.508 Initializing NVMe Controllers 00:16:22.508 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:22.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:22.508 
Initialization complete. Launching workers. 00:16:22.508 ======================================================== 00:16:22.508 Latency(us) 00:16:22.508 Device Information : IOPS MiB/s Average min max 00:16:22.508 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 18654.96 72.87 6868.59 2697.67 13279.73 00:16:22.508 ======================================================== 00:16:22.508 Total : 18654.96 72.87 6868.59 2697.67 13279.73 00:16:22.508 00:16:22.508 [2024-07-25 12:02:59.401914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:22.508 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:22.508 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.508 [2024-07-25 12:02:59.675042] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.781 Initializing NVMe Controllers 00:16:27.781 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:27.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:27.781 Initialization complete. Launching workers. 
00:16:27.781 ======================================================== 00:16:27.781 Latency(us) 00:16:27.781 Device Information : IOPS MiB/s Average min max 00:16:27.781 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24068.00 94.02 5317.82 1574.33 11443.06 00:16:27.781 ======================================================== 00:16:27.781 Total : 24068.00 94.02 5317.82 1574.33 11443.06 00:16:27.781 00:16:27.781 [2024-07-25 12:03:04.695972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.781 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:27.781 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.781 [2024-07-25 12:03:04.982108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.051 [2024-07-25 12:03:10.143494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.051 Initializing NVMe Controllers 00:16:33.051 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:33.051 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:33.051 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:33.051 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:33.051 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:33.051 Initialization complete. Launching workers. 
00:16:33.051 Starting thread on core 2 00:16:33.051 Starting thread on core 3 00:16:33.051 Starting thread on core 1 00:16:33.051 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:33.051 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.310 [2024-07-25 12:03:10.488493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:36.597 [2024-07-25 12:03:13.561615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:36.597 Initializing NVMe Controllers 00:16:36.597 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:36.597 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:36.597 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:36.597 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:36.597 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:36.597 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:36.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:36.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:36.597 Initialization complete. Launching workers. 
00:16:36.597 Starting thread on core 1 with urgent priority queue 00:16:36.597 Starting thread on core 2 with urgent priority queue 00:16:36.597 Starting thread on core 3 with urgent priority queue 00:16:36.597 Starting thread on core 0 with urgent priority queue 00:16:36.597 SPDK bdev Controller (SPDK2 ) core 0: 7129.67 IO/s 14.03 secs/100000 ios 00:16:36.597 SPDK bdev Controller (SPDK2 ) core 1: 3988.33 IO/s 25.07 secs/100000 ios 00:16:36.597 SPDK bdev Controller (SPDK2 ) core 2: 4973.33 IO/s 20.11 secs/100000 ios 00:16:36.597 SPDK bdev Controller (SPDK2 ) core 3: 7297.33 IO/s 13.70 secs/100000 ios 00:16:36.597 ======================================================== 00:16:36.597 00:16:36.597 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:36.597 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.597 [2024-07-25 12:03:13.874365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:36.597 Initializing NVMe Controllers 00:16:36.597 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:36.597 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:36.597 Namespace ID: 1 size: 0GB 00:16:36.597 Initialization complete. 00:16:36.597 INFO: using host memory buffer for IO 00:16:36.597 Hello world! 
00:16:36.597 [2024-07-25 12:03:13.884108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:36.857 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:36.857 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.115 [2024-07-25 12:03:14.191120] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:38.051 Initializing NVMe Controllers 00:16:38.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:38.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:38.051 Initialization complete. Launching workers. 00:16:38.051 submit (in ns) avg, min, max = 9844.2, 4535.5, 4001620.0 00:16:38.051 complete (in ns) avg, min, max = 35449.2, 2707.3, 4006911.8 00:16:38.051 00:16:38.051 Submit histogram 00:16:38.051 ================ 00:16:38.051 Range in us Cumulative Count 00:16:38.051 4.509 - 4.538: 0.0105% ( 1) 00:16:38.051 4.538 - 4.567: 0.4839% ( 45) 00:16:38.051 4.567 - 4.596: 1.9987% ( 144) 00:16:38.051 4.596 - 4.625: 4.7654% ( 263) 00:16:38.051 4.625 - 4.655: 8.4157% ( 347) 00:16:38.051 4.655 - 4.684: 16.9682% ( 813) 00:16:38.051 4.684 - 4.713: 27.5195% ( 1003) 00:16:38.051 4.713 - 4.742: 41.1214% ( 1293) 00:16:38.051 4.742 - 4.771: 53.6924% ( 1195) 00:16:38.051 4.771 - 4.800: 62.6131% ( 848) 00:16:38.051 4.800 - 4.829: 72.8592% ( 974) 00:16:38.051 4.829 - 4.858: 80.0757% ( 686) 00:16:38.051 4.858 - 4.887: 84.7360% ( 443) 00:16:38.051 4.887 - 4.916: 86.9661% ( 212) 00:16:38.051 4.916 - 4.945: 88.2600% ( 123) 00:16:38.051 4.945 - 4.975: 89.6907% ( 136) 00:16:38.051 4.975 - 5.004: 91.4685% ( 169) 00:16:38.051 5.004 - 5.033: 93.2043% ( 165) 00:16:38.051 5.033 - 5.062: 94.9295% ( 164) 00:16:38.051 5.062 - 
5.091: 96.6547% ( 164) 00:16:38.051 5.091 - 5.120: 97.8119% ( 110) 00:16:38.051 5.120 - 5.149: 98.5167% ( 67) 00:16:38.051 5.149 - 5.178: 98.9901% ( 45) 00:16:38.051 5.178 - 5.207: 99.2321% ( 23) 00:16:38.051 5.207 - 5.236: 99.4425% ( 20) 00:16:38.051 5.236 - 5.265: 99.4530% ( 1) 00:16:38.051 5.265 - 5.295: 99.5161% ( 6) 00:16:38.051 5.324 - 5.353: 99.5266% ( 1) 00:16:38.051 7.505 - 7.564: 99.5371% ( 1) 00:16:38.051 7.564 - 7.622: 99.5477% ( 1) 00:16:38.051 7.738 - 7.796: 99.5582% ( 1) 00:16:38.051 7.796 - 7.855: 99.5792% ( 2) 00:16:38.051 7.855 - 7.913: 99.5897% ( 1) 00:16:38.051 7.913 - 7.971: 99.6003% ( 1) 00:16:38.051 7.971 - 8.029: 99.6213% ( 2) 00:16:38.051 8.087 - 8.145: 99.6318% ( 1) 00:16:38.051 8.495 - 8.553: 99.6529% ( 2) 00:16:38.051 8.553 - 8.611: 99.6634% ( 1) 00:16:38.051 8.669 - 8.727: 99.6739% ( 1) 00:16:38.051 8.727 - 8.785: 99.6844% ( 1) 00:16:38.051 8.844 - 8.902: 99.6949% ( 1) 00:16:38.051 8.960 - 9.018: 99.7054% ( 1) 00:16:38.051 9.135 - 9.193: 99.7160% ( 1) 00:16:38.051 9.193 - 9.251: 99.7265% ( 1) 00:16:38.051 9.425 - 9.484: 99.7370% ( 1) 00:16:38.051 9.484 - 9.542: 99.7580% ( 2) 00:16:38.051 9.542 - 9.600: 99.7791% ( 2) 00:16:38.051 9.658 - 9.716: 99.8001% ( 2) 00:16:38.051 9.775 - 9.833: 99.8106% ( 1) 00:16:38.051 9.891 - 9.949: 99.8212% ( 1) 00:16:38.051 10.065 - 10.124: 99.8317% ( 1) 00:16:38.051 10.182 - 10.240: 99.8422% ( 1) 00:16:38.051 10.415 - 10.473: 99.8527% ( 1) 00:16:38.051 10.473 - 10.531: 99.8632% ( 1) 00:16:38.051 11.171 - 11.229: 99.8738% ( 1) 00:16:38.051 3991.738 - 4021.527: 100.0000% ( 12) 00:16:38.051 00:16:38.051 Complete histogram 00:16:38.051 ================== 00:16:38.051 Range in us Cumulative Count 00:16:38.051 2.705 - 2.720: 2.0303% ( 193) 00:16:38.051 2.720 - 2.735: 20.8079% ( 1785) 00:16:38.052 2.735 - 2.749: 55.3861% ( 3287) 00:16:38.052 2.749 - 2.764: 70.3976% ( 1427) 00:16:38.052 2.764 - 2.778: 76.8988% ( 618) 00:16:38.052 2.778 - 2.793: 86.8820% ( 949) 00:16:38.052 2.793 - 2.807: 92.1734% ( 503) 
00:16:38.052 2.807 - 2.822: 94.5718% ( 228) 00:16:38.052 2.822 - 2.836: 96.5390% ( 187) 00:16:38.052 2.836 - 2.851: 97.3911% ( 81) 00:16:38.052 2.851 - 2.865: 98.0118% ( 59) 00:16:38.052 2.865 - 2.880: 98.4115% ( 38) 00:16:38.052 2.880 - 2.895: 98.6324% ( 21) 00:16:38.052 2.895 - 2.909: 98.6640% ( 3) 00:16:38.052 2.909 - 2.924: 98.7061% ( 4) 00:16:38.052 2.924 - 2.938: 98.7482% ( 4) 00:16:38.052 2.938 - 2.953: 98.7797% ( 3) 00:16:38.052 2.953 - 2.967: 98.8008% ( 2) 00:16:38.052 2.967 - 2.982: 98.8218% ( 2) 00:16:38.052 2.982 - 2.996: 98.8323% ( 1) 00:16:38.052 2.996 - 3.011: 98.8744% ( 4) 00:16:38.052 3.011 - 3.025: 98.8954% ( 2) 00:16:38.052 [2024-07-25 12:03:15.295046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:38.052 3.025 - 3.040: 98.9270% ( 3) 00:16:38.052 3.040 - 3.055: 98.9375% ( 1) 00:16:38.052 3.055 - 3.069: 98.9480% ( 1) 00:16:38.052 3.084 - 3.098: 98.9586% ( 1) 00:16:38.052 3.098 - 3.113: 98.9691% ( 1) 00:16:38.052 3.127 - 3.142: 98.9796% ( 1) 00:16:38.052 3.258 - 3.273: 98.9901% ( 1) 00:16:38.052 4.800 - 4.829: 99.0006% ( 1) 00:16:38.052 5.178 - 5.207: 99.0112% ( 1) 00:16:38.052 5.469 - 5.498: 99.0217% ( 1) 00:16:38.052 5.702 - 5.731: 99.0322% ( 1) 00:16:38.052 5.760 - 5.789: 99.0427% ( 1) 00:16:38.052 5.964 - 5.993: 99.0532% ( 1) 00:16:38.052 5.993 - 6.022: 99.0637% ( 1) 00:16:38.052 6.196 - 6.225: 99.0743% ( 1) 00:16:38.052 6.371 - 6.400: 99.0848% ( 1) 00:16:38.052 6.575 - 6.604: 99.0953% ( 1) 00:16:38.052 7.011 - 7.040: 99.1058% ( 1) 00:16:38.052 7.040 - 7.069: 99.1163% ( 1) 00:16:38.052 7.098 - 7.127: 99.1269% ( 1) 00:16:38.052 7.302 - 7.331: 99.1374% ( 1) 00:16:38.052 7.680 - 7.738: 99.1479% ( 1) 00:16:38.052 8.029 - 8.087: 99.1584% ( 1) 00:16:38.052 9.135 - 9.193: 99.1689% ( 1) 00:16:38.052 9.367 - 9.425: 99.1795% ( 1) 00:16:38.052 3410.851 - 3425.745: 99.1900% ( 1) 00:16:38.052 3574.691 - 3589.585: 99.2005% ( 1) 00:16:38.052 3991.738 - 4021.527: 100.0000% ( 76) 
00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:38.311 [ 00:16:38.311 { 00:16:38.311 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:38.311 "subtype": "Discovery", 00:16:38.311 "listen_addresses": [], 00:16:38.311 "allow_any_host": true, 00:16:38.311 "hosts": [] 00:16:38.311 }, 00:16:38.311 { 00:16:38.311 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:38.311 "subtype": "NVMe", 00:16:38.311 "listen_addresses": [ 00:16:38.311 { 00:16:38.311 "trtype": "VFIOUSER", 00:16:38.311 "adrfam": "IPv4", 00:16:38.311 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:38.311 "trsvcid": "0" 00:16:38.311 } 00:16:38.311 ], 00:16:38.311 "allow_any_host": true, 00:16:38.311 "hosts": [], 00:16:38.311 "serial_number": "SPDK1", 00:16:38.311 "model_number": "SPDK bdev Controller", 00:16:38.311 "max_namespaces": 32, 00:16:38.311 "min_cntlid": 1, 00:16:38.311 "max_cntlid": 65519, 00:16:38.311 "namespaces": [ 00:16:38.311 { 00:16:38.311 "nsid": 1, 00:16:38.311 "bdev_name": "Malloc1", 00:16:38.311 "name": "Malloc1", 00:16:38.311 "nguid": "4C86CC0CB3214EBFB50D9CA6E7B0E5EE", 00:16:38.311 "uuid": "4c86cc0c-b321-4ebf-b50d-9ca6e7b0e5ee" 00:16:38.311 }, 00:16:38.311 { 00:16:38.311 "nsid": 2, 00:16:38.311 "bdev_name": "Malloc3", 00:16:38.311 "name": "Malloc3", 
00:16:38.311 "nguid": "16FAE6E22BF14982B9B1E89576A7B554", 00:16:38.311 "uuid": "16fae6e2-2bf1-4982-b9b1-e89576a7b554" 00:16:38.311 } 00:16:38.311 ] 00:16:38.311 }, 00:16:38.311 { 00:16:38.311 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:38.311 "subtype": "NVMe", 00:16:38.311 "listen_addresses": [ 00:16:38.311 { 00:16:38.311 "trtype": "VFIOUSER", 00:16:38.311 "adrfam": "IPv4", 00:16:38.311 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:38.311 "trsvcid": "0" 00:16:38.311 } 00:16:38.311 ], 00:16:38.311 "allow_any_host": true, 00:16:38.311 "hosts": [], 00:16:38.311 "serial_number": "SPDK2", 00:16:38.311 "model_number": "SPDK bdev Controller", 00:16:38.311 "max_namespaces": 32, 00:16:38.311 "min_cntlid": 1, 00:16:38.311 "max_cntlid": 65519, 00:16:38.311 "namespaces": [ 00:16:38.311 { 00:16:38.311 "nsid": 1, 00:16:38.311 "bdev_name": "Malloc2", 00:16:38.311 "name": "Malloc2", 00:16:38.311 "nguid": "71597F867B1F4608A3345E4A0A244A45", 00:16:38.311 "uuid": "71597f86-7b1f-4608-a334-5e4a0a244a45" 00:16:38.311 } 00:16:38.311 ] 00:16:38.311 } 00:16:38.311 ] 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4112532 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:38.311 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:38.311 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.571 [2024-07-25 12:03:15.706408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:38.571 Malloc4 00:16:38.571 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:38.830 [2024-07-25 12:03:15.954312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:38.830 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:38.830 Asynchronous Event Request test 00:16:38.830 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:38.830 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:38.830 Registering asynchronous event callbacks... 00:16:38.830 Starting namespace attribute notice tests for all controllers... 00:16:38.830 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:38.830 aer_cb - Changed Namespace 00:16:38.830 Cleaning up... 
00:16:39.090 [ 00:16:39.090 { 00:16:39.090 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:39.090 "subtype": "Discovery", 00:16:39.090 "listen_addresses": [], 00:16:39.090 "allow_any_host": true, 00:16:39.090 "hosts": [] 00:16:39.090 }, 00:16:39.090 { 00:16:39.090 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:39.090 "subtype": "NVMe", 00:16:39.090 "listen_addresses": [ 00:16:39.090 { 00:16:39.090 "trtype": "VFIOUSER", 00:16:39.090 "adrfam": "IPv4", 00:16:39.090 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:39.090 "trsvcid": "0" 00:16:39.090 } 00:16:39.090 ], 00:16:39.090 "allow_any_host": true, 00:16:39.090 "hosts": [], 00:16:39.090 "serial_number": "SPDK1", 00:16:39.090 "model_number": "SPDK bdev Controller", 00:16:39.090 "max_namespaces": 32, 00:16:39.090 "min_cntlid": 1, 00:16:39.090 "max_cntlid": 65519, 00:16:39.090 "namespaces": [ 00:16:39.090 { 00:16:39.090 "nsid": 1, 00:16:39.091 "bdev_name": "Malloc1", 00:16:39.091 "name": "Malloc1", 00:16:39.091 "nguid": "4C86CC0CB3214EBFB50D9CA6E7B0E5EE", 00:16:39.091 "uuid": "4c86cc0c-b321-4ebf-b50d-9ca6e7b0e5ee" 00:16:39.091 }, 00:16:39.091 { 00:16:39.091 "nsid": 2, 00:16:39.091 "bdev_name": "Malloc3", 00:16:39.091 "name": "Malloc3", 00:16:39.091 "nguid": "16FAE6E22BF14982B9B1E89576A7B554", 00:16:39.091 "uuid": "16fae6e2-2bf1-4982-b9b1-e89576a7b554" 00:16:39.091 } 00:16:39.091 ] 00:16:39.091 }, 00:16:39.091 { 00:16:39.091 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:39.091 "subtype": "NVMe", 00:16:39.091 "listen_addresses": [ 00:16:39.091 { 00:16:39.091 "trtype": "VFIOUSER", 00:16:39.091 "adrfam": "IPv4", 00:16:39.091 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:39.091 "trsvcid": "0" 00:16:39.091 } 00:16:39.091 ], 00:16:39.091 "allow_any_host": true, 00:16:39.091 "hosts": [], 00:16:39.091 "serial_number": "SPDK2", 00:16:39.091 "model_number": "SPDK bdev Controller", 00:16:39.091 "max_namespaces": 32, 00:16:39.091 "min_cntlid": 1, 00:16:39.091 "max_cntlid": 65519, 00:16:39.091 "namespaces": [ 
00:16:39.091 { 00:16:39.091 "nsid": 1, 00:16:39.091 "bdev_name": "Malloc2", 00:16:39.091 "name": "Malloc2", 00:16:39.091 "nguid": "71597F867B1F4608A3345E4A0A244A45", 00:16:39.091 "uuid": "71597f86-7b1f-4608-a334-5e4a0a244a45" 00:16:39.091 }, 00:16:39.091 { 00:16:39.091 "nsid": 2, 00:16:39.091 "bdev_name": "Malloc4", 00:16:39.091 "name": "Malloc4", 00:16:39.091 "nguid": "E0EA50F26E074D2CBF2AF71E44B8C1FD", 00:16:39.091 "uuid": "e0ea50f2-6e07-4d2c-bf2a-f71e44b8c1fd" 00:16:39.091 } 00:16:39.091 ] 00:16:39.091 } 00:16:39.091 ] 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4112532 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4103485 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 4103485 ']' 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 4103485 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4103485 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4103485' 00:16:39.091 killing process with pid 4103485 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 4103485 00:16:39.091 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 4103485 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4112801 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4112801' 00:16:39.350 Process pid: 4112801 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4112801 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 4112801 ']' 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.350 
12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.350 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:39.350 [2024-07-25 12:03:16.570801] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:39.350 [2024-07-25 12:03:16.572059] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:16:39.350 [2024-07-25 12:03:16.572110] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.350 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.350 [2024-07-25 12:03:16.645151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.609 [2024-07-25 12:03:16.738087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.609 [2024-07-25 12:03:16.738131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.609 [2024-07-25 12:03:16.738141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.609 [2024-07-25 12:03:16.738151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.609 [2024-07-25 12:03:16.738160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.609 [2024-07-25 12:03:16.739628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.609 [2024-07-25 12:03:16.739665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.609 [2024-07-25 12:03:16.739774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.609 [2024-07-25 12:03:16.739774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.609 [2024-07-25 12:03:16.826418] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:39.609 [2024-07-25 12:03:16.826968] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:39.609 [2024-07-25 12:03:16.827209] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:39.609 [2024-07-25 12:03:16.827295] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:39.609 [2024-07-25 12:03:16.827644] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:39.609 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.609 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:39.609 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:40.987 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:40.987 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:40.987 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:40.987 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:40.987 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:40.987 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:40.987 Malloc1 00:16:40.987 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:41.246 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:41.506 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:41.506 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:41.506 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:41.506 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:41.767 Malloc2 00:16:41.767 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:42.025 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:42.025 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4112801 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 4112801 ']' 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 4112801 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.593 12:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4112801 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4112801' 00:16:42.593 killing process with pid 4112801 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 4112801 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 4112801 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:42.593 00:16:42.593 real 0m52.325s 00:16:42.593 user 3m26.873s 00:16:42.593 sys 0m3.833s 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.593 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:42.593 ************************************ 00:16:42.593 END TEST nvmf_vfio_user 00:16:42.593 ************************************ 00:16:42.852 12:03:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:42.852 12:03:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:42.852 12:03:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.852 12:03:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.852 ************************************ 00:16:42.852 START TEST nvmf_vfio_user_nvme_compliance 00:16:42.852 ************************************ 00:16:42.852 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:42.852 * Looking for test storage... 00:16:42.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.852 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.853 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.853 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=4113397 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4113397' 00:16:42.853 Process pid: 4113397 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4113397 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 4113397 ']' 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.853 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.853 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.853 [2024-07-25 12:03:20.134398] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:16:42.853 [2024-07-25 12:03:20.134458] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.111 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.111 [2024-07-25 12:03:20.217961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.111 [2024-07-25 12:03:20.310239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.111 [2024-07-25 12:03:20.310282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.111 [2024-07-25 12:03:20.310292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.111 [2024-07-25 12:03:20.310301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.111 [2024-07-25 12:03:20.310308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:43.111 [2024-07-25 12:03:20.310365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.111 [2024-07-25 12:03:20.310477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.111 [2024-07-25 12:03:20.310477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.111 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.111 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:43.370 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.305 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 malloc0 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:44.305 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:44.305 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.564 00:16:44.564 00:16:44.564 CUnit - A unit testing framework for C - Version 2.1-3 00:16:44.564 http://cunit.sourceforge.net/ 00:16:44.564 00:16:44.564 00:16:44.564 Suite: nvme_compliance 00:16:44.564 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 12:03:21.675358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.564 [2024-07-25 12:03:21.676886] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:44.564 [2024-07-25 12:03:21.676912] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:44.564 [2024-07-25 12:03:21.676925] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:44.564 [2024-07-25 12:03:21.678382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.564 passed 00:16:44.564 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 12:03:21.777507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.564 [2024-07-25 12:03:21.782556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.564 passed 00:16:44.822 Test: admin_identify_ns ...[2024-07-25 12:03:21.887057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.822 [2024-07-25 12:03:21.947624] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:44.822 [2024-07-25 12:03:21.955620] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:44.822 [2024-07-25 
12:03:21.976751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.822 passed 00:16:44.823 Test: admin_get_features_mandatory_features ...[2024-07-25 12:03:22.075704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.823 [2024-07-25 12:03:22.078735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.081 passed 00:16:45.081 Test: admin_get_features_optional_features ...[2024-07-25 12:03:22.178748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.081 [2024-07-25 12:03:22.182806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.081 passed 00:16:45.081 Test: admin_set_features_number_of_queues ...[2024-07-25 12:03:22.280923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.340 [2024-07-25 12:03:22.386715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.340 passed 00:16:45.340 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 12:03:22.487771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.340 [2024-07-25 12:03:22.490819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.340 passed 00:16:45.340 Test: admin_get_log_page_with_lpo ...[2024-07-25 12:03:22.586970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.600 [2024-07-25 12:03:22.655619] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:45.600 [2024-07-25 12:03:22.668701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.600 passed 00:16:45.600 Test: fabric_property_get ...[2024-07-25 12:03:22.765647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.600 [2024-07-25 12:03:22.767016] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:45.600 [2024-07-25 12:03:22.769692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.600 passed 00:16:45.600 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 12:03:22.868687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.600 [2024-07-25 12:03:22.870151] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:45.600 [2024-07-25 12:03:22.871717] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.858 passed 00:16:45.858 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 12:03:22.971878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:45.858 [2024-07-25 12:03:23.056616] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:45.858 [2024-07-25 12:03:23.072625] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:45.858 [2024-07-25 12:03:23.077709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:45.858 passed 00:16:46.117 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 12:03:23.178567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.117 [2024-07-25 12:03:23.180050] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:46.117 [2024-07-25 12:03:23.181614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.117 passed 00:16:46.117 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 12:03:23.280424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.117 [2024-07-25 12:03:23.359621] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:16:46.117 [2024-07-25 12:03:23.383616] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:46.117 [2024-07-25 12:03:23.388736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.374 passed 00:16:46.374 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 12:03:23.487655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.374 [2024-07-25 12:03:23.489165] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:46.374 [2024-07-25 12:03:23.489223] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:46.374 [2024-07-25 12:03:23.490709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.374 passed 00:16:46.374 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 12:03:23.592425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.633 [2024-07-25 12:03:23.683611] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:46.633 [2024-07-25 12:03:23.691615] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:46.633 [2024-07-25 12:03:23.699630] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:46.633 [2024-07-25 12:03:23.707613] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:46.633 [2024-07-25 12:03:23.736706] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.633 passed 00:16:46.633 Test: admin_create_io_sq_verify_pc ...[2024-07-25 12:03:23.836818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.633 [2024-07-25 12:03:23.856634] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:46.633 
[2024-07-25 12:03:23.874106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.633 passed 00:16:46.891 Test: admin_create_io_qp_max_qps ...[2024-07-25 12:03:23.973121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:47.825 [2024-07-25 12:03:25.079616] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:48.391 [2024-07-25 12:03:25.467081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:48.391 passed 00:16:48.391 Test: admin_create_io_sq_shared_cq ...[2024-07-25 12:03:25.564434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:48.650 [2024-07-25 12:03:25.698616] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:48.650 [2024-07-25 12:03:25.735699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:48.650 passed 00:16:48.650 00:16:48.650 Run Summary: Type Total Ran Passed Failed Inactive 00:16:48.650 suites 1 1 n/a 0 0 00:16:48.650 tests 18 18 18 0 0 00:16:48.650 asserts 360 360 360 0 n/a 00:16:48.650 00:16:48.650 Elapsed time = 1.714 seconds 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4113397 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 4113397 ']' 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 4113397 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.650 12:03:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4113397 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4113397' 00:16:48.650 killing process with pid 4113397 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 4113397 00:16:48.650 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 4113397 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:48.942 00:16:48.942 real 0m6.130s 00:16:48.942 user 0m17.221s 00:16:48.942 sys 0m0.507s 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:48.942 ************************************ 00:16:48.942 END TEST nvmf_vfio_user_nvme_compliance 00:16:48.942 ************************************ 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:48.942 ************************************ 00:16:48.942 START TEST nvmf_vfio_user_fuzz 00:16:48.942 ************************************ 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:48.942 * Looking for test storage... 00:16:48.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.942 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.201 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.202 12:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:49.202 12:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4114507 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4114507' 00:16:49.202 Process pid: 4114507 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4114507 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 4114507 ']' 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.202 12:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.202 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.138 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.138 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:50.138 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.075 malloc0 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:51.075 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:23.153 Fuzzing completed. Shutting down the fuzz application 00:17:23.153 00:17:23.153 Dumping successful admin opcodes: 00:17:23.153 8, 9, 10, 24, 00:17:23.153 Dumping successful io opcodes: 00:17:23.153 0, 00:17:23.153 NS: 0x200003a1ef00 I/O qp, Total commands completed: 584240, total successful commands: 2249, random_seed: 3437727296 00:17:23.153 NS: 0x200003a1ef00 admin qp, Total commands completed: 143716, total successful commands: 1168, random_seed: 3220967936 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4114507 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 4114507 ']' 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 4114507 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4114507 00:17:23.153 12:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4114507' 00:17:23.153 killing process with pid 4114507 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 4114507 00:17:23.153 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 4114507 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:23.153 00:17:23.153 real 0m33.130s 00:17:23.153 user 0m37.519s 00:17:23.153 sys 0m25.141s 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.153 ************************************ 00:17:23.153 END TEST nvmf_vfio_user_fuzz 00:17:23.153 ************************************ 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.153 ************************************ 00:17:23.153 START TEST nvmf_auth_target 00:17:23.153 ************************************ 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:23.153 * Looking for test storage... 00:17:23.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 12:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.153 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
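The PATH echoed above shows the same /opt/go, /opt/golangci, and /opt/protoc segments prepended once per sourcing of paths/export.sh, which is why the variable balloons. A dedup-on-prepend helper (a sketch, not part of the SPDK scripts) would keep repeated sourcing idempotent:

```shell
# Start from a known PATH so the effect is visible.
PATH=/usr/bin:/bin

# Prepend a directory only if it is not already on PATH.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present: no-op
        *) PATH="$1:$PATH" ;;
    esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # repeated call changes nothing
echo "$PATH"                      # → /opt/go/1.21.1/bin:/usr/bin:/bin
```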
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.154 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:28.426 12:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:28.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.426 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:28.427 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.427 12:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:28.427 Found net devices under 0000:af:00.0: cvl_0_0 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:28.427 Found net devices under 0000:af:00.1: cvl_0_1 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.427 12:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:28.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:17:28.427 00:17:28.427 --- 10.0.0.2 ping statistics --- 00:17:28.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.427 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:17:28.427 00:17:28.427 --- 10.0.0.1 ping statistics --- 00:17:28.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.427 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
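nvmf_tcp_init above moves the first NIC (cvl_0_0) into a private network namespace for the target and leaves the second (cvl_0_1) in the default namespace for the initiator, assigning 10.0.0.2 and 10.0.0.1 respectively; the two pings verify both directions. A condensed dry-run sketch of that sequence (interface and namespace names are taken from the log; the real commands need CAP_NET_ADMIN, so this version only prints the plan):

```shell
NS=cvl_0_0_ns_spdk

# Dry-run: print each command instead of executing it, and count them.
n=0
run() { n=$((n + 1)); echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"          # target NIC moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, default netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

Replacing `run` with direct execution (as root) reproduces the topology the log builds.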
timing_enter start_nvmf_tgt 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4123652 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4123652 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 4123652 ']' 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.427 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4123683 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:29.363 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=41335d5421376880d2ee589cca05d35c74effd0bffec070f 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Zun 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 41335d5421376880d2ee589cca05d35c74effd0bffec070f 0 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 41335d5421376880d2ee589cca05d35c74effd0bffec070f 0 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=41335d5421376880d2ee589cca05d35c74effd0bffec070f 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Zun 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Zun 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Zun 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
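As traced above, gen_dhchap_key draws len/2 random bytes from /dev/urandom with xxd, hex-encodes them (a 48-character key from 24 bytes), writes the result to a mktemp file, and locks it down with chmod 0600; an inline python helper (not reproduced here) then wraps the hex in the DHHC-1 envelope. A portable sketch of the core steps, using od as a stand-in for xxd:

```shell
len=48    # hex characters wanted (the null-digest case from the log)

# len/2 random bytes, hex-encoded: od substitutes for the
# `xxd -p -c0 -l $((len / 2)) /dev/urandom` seen in the trace.
key=$(od -An -tx1 -N "$((len / 2))" /dev/urandom | tr -d ' \n')

# Same temp-file discipline as nvmf/common.sh: mktemp, then 0600.
file=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$key" > "$file"
chmod 0600 "$file"

echo "wrote ${#key}-char key to $file"
```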
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0dd7ed4dc6dfa582e7061ce6e38daf299d7336afe0acc187303da9bf0f7f108b 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.H9i 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0dd7ed4dc6dfa582e7061ce6e38daf299d7336afe0acc187303da9bf0f7f108b 3 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0dd7ed4dc6dfa582e7061ce6e38daf299d7336afe0acc187303da9bf0f7f108b 3 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0dd7ed4dc6dfa582e7061ce6e38daf299d7336afe0acc187303da9bf0f7f108b 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.H9i 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.H9i 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.H9i 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2525758c6a41b9cd79026e23779c047d 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ePA 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2525758c6a41b9cd79026e23779c047d 1 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
2525758c6a41b9cd79026e23779c047d 1 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2525758c6a41b9cd79026e23779c047d 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ePA 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ePA 00:17:29.364 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ePA 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d46ae0996e137e992463cda7412c3b7769b9e874ec7fdf82 00:17:29.624 12:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4Vg 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d46ae0996e137e992463cda7412c3b7769b9e874ec7fdf82 2 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d46ae0996e137e992463cda7412c3b7769b9e874ec7fdf82 2 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d46ae0996e137e992463cda7412c3b7769b9e874ec7fdf82 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4Vg 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4Vg 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4Vg 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=743c65d6d489855238d68c9f20549b0504013341b6bd8964 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OWH 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 743c65d6d489855238d68c9f20549b0504013341b6bd8964 2 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 743c65d6d489855238d68c9f20549b0504013341b6bd8964 2 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=743c65d6d489855238d68c9f20549b0504013341b6bd8964 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OWH 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OWH 00:17:29.624 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.OWH 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fe2552535deb965b5d6fa687713a89aa 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Dml 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fe2552535deb965b5d6fa687713a89aa 1 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fe2552535deb965b5d6fa687713a89aa 1 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fe2552535deb965b5d6fa687713a89aa 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Dml 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Dml 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Dml 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a1061fdc972bbba2b01620021788a91470d907ce28aa84c8226cda2a506c909 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Irs 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a1061fdc972bbba2b01620021788a91470d907ce28aa84c8226cda2a506c909 3 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 1a1061fdc972bbba2b01620021788a91470d907ce28aa84c8226cda2a506c909 3 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a1061fdc972bbba2b01620021788a91470d907ce28aa84c8226cda2a506c909 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:29.625 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Irs 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Irs 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Irs 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4123652 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 4123652 ']' 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
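The gen_dhchap_key calls traced above each draw len/2 random bytes with `xxd -p -c0 -l <n> /dev/urandom`, then hand the hex string to an inline `python -` snippet (format_key, nvmf/common.sh@719) that wraps it as a `DHHC-1:<digest>:<payload>:` secret. A minimal sketch of that formatting step — assuming, as the interoperable nvme-cli key format does, that the payload is the ASCII hex key with its little-endian CRC-32 appended, base64-encoded; the helper names below are illustrative, not SPDK's:

```python
import base64
import os
import zlib

# Digest indices as used in the trace: null=0, sha256=1, sha384=2, sha512=3
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_hex_key(hex_len: int) -> str:
    """Equivalent of `xxd -p -c0 -l <hex_len/2> /dev/urandom`."""
    return os.urandom(hex_len // 2).hex()

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Sketch of the inline `python -` step (format_key DHHC-1 <key> <digest>).

    Assumed layout (nvme-cli compatible): base64 of the ASCII hex key
    followed by its little-endian CRC-32, wrapped as DHHC-1:<dd>:<b64>:.
    """
    raw = hex_key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "DHHC-1:%02x:%s:" % (digest, base64.b64encode(raw + crc).decode())

# The sha256/len-32 key drawn at nvmf/common.sh@727 above:
secret = format_dhchap_key("2525758c6a41b9cd79026e23779c047d", DIGESTS["sha256"])
```

Base64-decoding the `--dhchap-secret` values that appear later in the trace (the `DHHC-1:01:...`/`DHHC-1:02:...` arguments to `nvme connect`) yields exactly these hex strings plus a 4-byte trailer, which is how the generated keys and the connect arguments can be cross-checked.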
00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.884 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4123683 /var/tmp/host.sock 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 4123683 ']' 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:29.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.884 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Zun 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Zun 00:17:30.143 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Zun 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.H9i ]] 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9i 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9i 00:17:30.402 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9i 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ePA 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ePA 00:17:30.660 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ePA 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.4Vg ]] 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Vg 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Vg 00:17:30.919 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Vg 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OWH 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OWH 00:17:31.177 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OWH 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.Dml ]] 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dml 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dml 00:17:31.436 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dml 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Irs 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Irs 00:17:31.695 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Irs 00:17:31.953 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:17:31.953 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:31.953 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.953 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.953 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.953 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.211 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
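After attaching the controller, `connect_authenticate` fetches the subsystem's queue pairs and checks `.[0].auth.digest`, `.[0].auth.dhgroup` and `.[0].auth.state` with jq (target/auth.sh@46-48). The same checks, applied in Python to the qpairs JSON that the trace prints for this round:

```python
import json

# qpairs as printed by `rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0`
qpairs = json.loads("""
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "51538"},
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]
""")

auth = qpairs[0]["auth"]
# Mirrors the script's jq checks: the negotiated auth parameters must match
# what bdev_nvme_set_options configured for this digest/dhgroup combination.
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
```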
00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.212 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.212 00:17:32.471 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.471 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.471 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:32.730 { 00:17:32.730 "cntlid": 1, 00:17:32.730 "qid": 0, 00:17:32.730 "state": "enabled", 00:17:32.730 "thread": "nvmf_tgt_poll_group_000", 00:17:32.730 "listen_address": { 00:17:32.730 "trtype": "TCP", 00:17:32.730 "adrfam": "IPv4", 00:17:32.730 "traddr": "10.0.0.2", 00:17:32.730 "trsvcid": "4420" 00:17:32.730 }, 00:17:32.730 "peer_address": { 00:17:32.730 "trtype": "TCP", 00:17:32.730 "adrfam": "IPv4", 00:17:32.730 "traddr": "10.0.0.1", 00:17:32.730 "trsvcid": "51538" 00:17:32.730 }, 00:17:32.730 "auth": { 00:17:32.730 "state": "completed", 00:17:32.730 "digest": "sha256", 00:17:32.730 "dhgroup": "null" 00:17:32.730 } 00:17:32.730 } 00:17:32.730 ]' 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.730 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.987 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:33.959 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.527 12:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.527 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.786 00:17:34.786 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.786 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.786 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.046 { 00:17:35.046 "cntlid": 3, 00:17:35.046 "qid": 0, 00:17:35.046 "state": "enabled", 00:17:35.046 "thread": "nvmf_tgt_poll_group_000", 00:17:35.046 "listen_address": { 00:17:35.046 "trtype": "TCP", 00:17:35.046 "adrfam": "IPv4", 00:17:35.046 "traddr": "10.0.0.2", 00:17:35.046 "trsvcid": "4420" 00:17:35.046 }, 00:17:35.046 "peer_address": { 00:17:35.046 "trtype": "TCP", 00:17:35.046 "adrfam": "IPv4", 00:17:35.046 "traddr": "10.0.0.1", 00:17:35.046 "trsvcid": "51564" 00:17:35.046 }, 00:17:35.046 "auth": { 00:17:35.046 "state": "completed", 00:17:35.046 "digest": "sha256", 00:17:35.046 "dhgroup": "null" 00:17:35.046 } 00:17:35.046 } 00:17:35.046 ]' 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.046 12:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.046 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.305 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:36.242 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.500 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.500 
12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.759 00:17:36.759 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.759 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.759 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.017 { 00:17:37.017 "cntlid": 5, 00:17:37.017 "qid": 0, 00:17:37.017 "state": "enabled", 00:17:37.017 "thread": "nvmf_tgt_poll_group_000", 00:17:37.017 "listen_address": { 00:17:37.017 "trtype": "TCP", 00:17:37.017 "adrfam": "IPv4", 00:17:37.017 "traddr": "10.0.0.2", 00:17:37.017 "trsvcid": "4420" 00:17:37.017 }, 00:17:37.017 "peer_address": { 00:17:37.017 "trtype": "TCP", 00:17:37.017 "adrfam": "IPv4", 00:17:37.017 "traddr": 
"10.0.0.1", 00:17:37.017 "trsvcid": "51596" 00:17:37.017 }, 00:17:37.017 "auth": { 00:17:37.017 "state": "completed", 00:17:37.017 "digest": "sha256", 00:17:37.017 "dhgroup": "null" 00:17:37.017 } 00:17:37.017 } 00:17:37.017 ]' 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.017 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.276 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.276 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.276 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.550 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:17:38.117 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.117 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:38.117 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.117 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.376 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.635 00:17:38.635 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.635 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.635 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.893 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.893 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.893 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.893 12:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.893 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.893 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.893 { 00:17:38.893 "cntlid": 7, 00:17:38.893 "qid": 0, 00:17:38.893 "state": "enabled", 00:17:38.893 "thread": "nvmf_tgt_poll_group_000", 00:17:38.893 "listen_address": { 00:17:38.893 "trtype": "TCP", 00:17:38.893 "adrfam": "IPv4", 00:17:38.893 "traddr": "10.0.0.2", 00:17:38.893 "trsvcid": "4420" 00:17:38.893 }, 00:17:38.893 "peer_address": { 00:17:38.893 "trtype": "TCP", 00:17:38.893 "adrfam": "IPv4", 00:17:38.893 "traddr": "10.0.0.1", 00:17:38.893 "trsvcid": "45654" 00:17:38.893 }, 00:17:38.893 "auth": { 00:17:38.893 "state": "completed", 00:17:38.893 "digest": "sha256", 00:17:38.893 "dhgroup": "null" 00:17:38.893 } 00:17:38.893 } 00:17:38.893 ]' 00:17:38.893 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.152 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.410 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:17:39.976 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.235 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.493 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.751 00:17:40.751 12:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.751 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.751 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.318 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.318 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.318 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.318 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.318 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.318 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.318 { 00:17:41.318 "cntlid": 9, 00:17:41.318 "qid": 0, 00:17:41.318 "state": "enabled", 00:17:41.318 "thread": "nvmf_tgt_poll_group_000", 00:17:41.318 "listen_address": { 00:17:41.318 "trtype": "TCP", 00:17:41.318 "adrfam": "IPv4", 00:17:41.318 "traddr": "10.0.0.2", 00:17:41.318 "trsvcid": "4420" 00:17:41.318 }, 00:17:41.318 "peer_address": { 00:17:41.318 "trtype": "TCP", 00:17:41.318 "adrfam": "IPv4", 00:17:41.318 "traddr": "10.0.0.1", 00:17:41.318 "trsvcid": "45690" 00:17:41.319 }, 00:17:41.319 "auth": { 00:17:41.319 "state": "completed", 00:17:41.319 "digest": "sha256", 00:17:41.319 "dhgroup": "ffdhe2048" 00:17:41.319 } 00:17:41.319 } 00:17:41.319 ]' 00:17:41.319 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.319 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.319 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.319 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.319 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.577 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.577 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.577 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.836 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.403 12:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:42.403 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.968 12:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.968 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.227 00:17:43.227 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.227 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.227 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.745 { 
00:17:43.745 "cntlid": 11, 00:17:43.745 "qid": 0, 00:17:43.745 "state": "enabled", 00:17:43.745 "thread": "nvmf_tgt_poll_group_000", 00:17:43.745 "listen_address": { 00:17:43.745 "trtype": "TCP", 00:17:43.745 "adrfam": "IPv4", 00:17:43.745 "traddr": "10.0.0.2", 00:17:43.745 "trsvcid": "4420" 00:17:43.745 }, 00:17:43.745 "peer_address": { 00:17:43.745 "trtype": "TCP", 00:17:43.745 "adrfam": "IPv4", 00:17:43.745 "traddr": "10.0.0.1", 00:17:43.745 "trsvcid": "45726" 00:17:43.745 }, 00:17:43.745 "auth": { 00:17:43.745 "state": "completed", 00:17:43.745 "digest": "sha256", 00:17:43.745 "dhgroup": "ffdhe2048" 00:17:43.745 } 00:17:43.745 } 00:17:43.745 ]' 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.745 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.004 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.571 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.830 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.397 00:17:45.397 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.397 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.397 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.656 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.656 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.656 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.656 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.916 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.916 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.916 { 00:17:45.916 "cntlid": 13, 00:17:45.916 "qid": 0, 00:17:45.916 "state": "enabled", 00:17:45.916 "thread": "nvmf_tgt_poll_group_000", 00:17:45.916 "listen_address": { 00:17:45.916 "trtype": "TCP", 00:17:45.916 "adrfam": "IPv4", 00:17:45.916 "traddr": "10.0.0.2", 00:17:45.916 "trsvcid": "4420" 00:17:45.916 }, 00:17:45.916 "peer_address": { 00:17:45.916 "trtype": "TCP", 00:17:45.916 "adrfam": "IPv4", 00:17:45.916 "traddr": "10.0.0.1", 00:17:45.916 "trsvcid": "45752" 00:17:45.916 }, 00:17:45.916 "auth": { 00:17:45.916 "state": "completed", 00:17:45.916 "digest": "sha256", 00:17:45.916 "dhgroup": "ffdhe2048" 00:17:45.916 } 00:17:45.916 } 00:17:45.916 ]' 00:17:45.916 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.916 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.491 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:47.425 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.426 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.993 00:17:47.993 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.993 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.993 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.251 { 00:17:48.251 "cntlid": 15, 00:17:48.251 "qid": 0, 00:17:48.251 "state": "enabled", 00:17:48.251 "thread": "nvmf_tgt_poll_group_000", 00:17:48.251 "listen_address": { 00:17:48.251 "trtype": "TCP", 00:17:48.251 "adrfam": "IPv4", 00:17:48.251 "traddr": "10.0.0.2", 00:17:48.251 "trsvcid": "4420" 00:17:48.251 }, 00:17:48.251 "peer_address": { 00:17:48.251 "trtype": "TCP", 00:17:48.251 "adrfam": "IPv4", 00:17:48.251 "traddr": "10.0.0.1", 00:17:48.251 "trsvcid": "34658" 00:17:48.251 }, 00:17:48.251 "auth": { 
00:17:48.251 "state": "completed", 00:17:48.251 "digest": "sha256", 00:17:48.251 "dhgroup": "ffdhe2048" 00:17:48.251 } 00:17:48.251 } 00:17:48.251 ]' 00:17:48.251 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.538 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.796 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.364 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.622 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.190 00:17:50.190 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.190 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.190 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.190 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.190 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.449 { 00:17:50.449 "cntlid": 17, 00:17:50.449 "qid": 0, 00:17:50.449 "state": "enabled", 00:17:50.449 "thread": "nvmf_tgt_poll_group_000", 00:17:50.449 "listen_address": { 00:17:50.449 "trtype": "TCP", 00:17:50.449 "adrfam": "IPv4", 00:17:50.449 "traddr": "10.0.0.2", 00:17:50.449 "trsvcid": "4420" 00:17:50.449 }, 00:17:50.449 "peer_address": { 00:17:50.449 "trtype": "TCP", 00:17:50.449 "adrfam": "IPv4", 00:17:50.449 "traddr": "10.0.0.1", 00:17:50.449 "trsvcid": "34678" 00:17:50.449 }, 00:17:50.449 "auth": { 00:17:50.449 "state": "completed", 00:17:50.449 "digest": "sha256", 00:17:50.449 "dhgroup": "ffdhe3072" 00:17:50.449 } 00:17:50.449 } 00:17:50.449 ]' 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.449 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.707 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.643 12:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.643 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:52.210 00:17:52.210 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.210 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.210 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.468 { 00:17:52.468 "cntlid": 19, 00:17:52.468 "qid": 0, 00:17:52.468 "state": "enabled", 00:17:52.468 "thread": "nvmf_tgt_poll_group_000", 00:17:52.468 "listen_address": { 00:17:52.468 "trtype": "TCP", 00:17:52.468 "adrfam": "IPv4", 00:17:52.468 "traddr": "10.0.0.2", 00:17:52.468 "trsvcid": "4420" 00:17:52.468 }, 00:17:52.468 "peer_address": { 00:17:52.468 "trtype": "TCP", 00:17:52.468 "adrfam": "IPv4", 00:17:52.468 "traddr": "10.0.0.1", 00:17:52.468 "trsvcid": "34710" 00:17:52.468 }, 00:17:52.468 "auth": { 00:17:52.468 "state": "completed", 00:17:52.468 "digest": "sha256", 00:17:52.468 "dhgroup": "ffdhe3072" 00:17:52.468 } 00:17:52.468 } 00:17:52.468 ]' 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.468 
12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.468 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.726 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.662 12:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.662 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.921 12:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.921 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.178 00:17:54.178 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.178 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.178 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.437 { 
00:17:54.437 "cntlid": 21, 00:17:54.437 "qid": 0, 00:17:54.437 "state": "enabled", 00:17:54.437 "thread": "nvmf_tgt_poll_group_000", 00:17:54.437 "listen_address": { 00:17:54.437 "trtype": "TCP", 00:17:54.437 "adrfam": "IPv4", 00:17:54.437 "traddr": "10.0.0.2", 00:17:54.437 "trsvcid": "4420" 00:17:54.437 }, 00:17:54.437 "peer_address": { 00:17:54.437 "trtype": "TCP", 00:17:54.437 "adrfam": "IPv4", 00:17:54.437 "traddr": "10.0.0.1", 00:17:54.437 "trsvcid": "34750" 00:17:54.437 }, 00:17:54.437 "auth": { 00:17:54.437 "state": "completed", 00:17:54.437 "digest": "sha256", 00:17:54.437 "dhgroup": "ffdhe3072" 00:17:54.437 } 00:17:54.437 } 00:17:54.437 ]' 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.437 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.696 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.696 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.696 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.696 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.696 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.954 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.522 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.781 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.348 00:17:56.348 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.348 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.348 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.606 12:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.606 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.606 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.606 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.606 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.606 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.606 { 00:17:56.606 "cntlid": 23, 00:17:56.606 "qid": 0, 00:17:56.606 "state": "enabled", 00:17:56.606 "thread": "nvmf_tgt_poll_group_000", 00:17:56.606 "listen_address": { 00:17:56.606 "trtype": "TCP", 00:17:56.606 "adrfam": "IPv4", 00:17:56.606 "traddr": "10.0.0.2", 00:17:56.606 "trsvcid": "4420" 00:17:56.606 }, 00:17:56.606 "peer_address": { 00:17:56.607 "trtype": "TCP", 00:17:56.607 "adrfam": "IPv4", 00:17:56.607 "traddr": "10.0.0.1", 00:17:56.607 "trsvcid": "34792" 00:17:56.607 }, 00:17:56.607 "auth": { 00:17:56.607 "state": "completed", 00:17:56.607 "digest": "sha256", 00:17:56.607 "dhgroup": "ffdhe3072" 00:17:56.607 } 00:17:56.607 } 00:17:56.607 ]' 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.607 12:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.607 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.866 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.801 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.059 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:58.059 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.059 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.059 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.059 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.059 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.060 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.060 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.060 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.060 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.060 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.060 12:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.350 00:17:58.350 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.350 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.350 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.608 { 00:17:58.608 "cntlid": 25, 00:17:58.608 "qid": 0, 00:17:58.608 "state": "enabled", 00:17:58.608 "thread": "nvmf_tgt_poll_group_000", 00:17:58.608 "listen_address": { 00:17:58.608 "trtype": "TCP", 00:17:58.608 "adrfam": "IPv4", 00:17:58.608 "traddr": "10.0.0.2", 00:17:58.608 "trsvcid": "4420" 00:17:58.608 }, 00:17:58.608 "peer_address": { 00:17:58.608 "trtype": "TCP", 00:17:58.608 "adrfam": "IPv4", 00:17:58.608 "traddr": "10.0.0.1", 
00:17:58.608 "trsvcid": "41072" 00:17:58.608 }, 00:17:58.608 "auth": { 00:17:58.608 "state": "completed", 00:17:58.608 "digest": "sha256", 00:17:58.608 "dhgroup": "ffdhe4096" 00:17:58.608 } 00:17:58.608 } 00:17:58.608 ]' 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.608 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.866 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.801 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.060 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.318 00:18:00.318 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.318 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.318 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.577 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.577 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.577 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:00.577 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.577 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.577 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.577 { 00:18:00.577 "cntlid": 27, 00:18:00.577 "qid": 0, 00:18:00.577 "state": "enabled", 00:18:00.577 "thread": "nvmf_tgt_poll_group_000", 00:18:00.577 "listen_address": { 00:18:00.577 "trtype": "TCP", 00:18:00.577 "adrfam": "IPv4", 00:18:00.577 "traddr": "10.0.0.2", 00:18:00.577 "trsvcid": "4420" 00:18:00.577 }, 00:18:00.577 "peer_address": { 00:18:00.577 "trtype": "TCP", 00:18:00.577 "adrfam": "IPv4", 00:18:00.577 "traddr": "10.0.0.1", 00:18:00.577 "trsvcid": "41104" 00:18:00.577 }, 00:18:00.577 "auth": { 00:18:00.577 "state": "completed", 00:18:00.577 "digest": "sha256", 00:18:00.577 "dhgroup": "ffdhe4096" 00:18:00.577 } 00:18:00.577 } 00:18:00.577 ]' 00:18:00.836 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.836 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.836 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.836 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.836 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.836 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.836 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.836 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.095 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.038 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.297 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.865 00:18:02.865 12:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.865 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.865 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.865 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.865 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.865 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.865 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.865 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.865 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.865 { 00:18:02.865 "cntlid": 29, 00:18:02.865 "qid": 0, 00:18:02.865 "state": "enabled", 00:18:02.865 "thread": "nvmf_tgt_poll_group_000", 00:18:02.865 "listen_address": { 00:18:02.865 "trtype": "TCP", 00:18:02.865 "adrfam": "IPv4", 00:18:02.865 "traddr": "10.0.0.2", 00:18:02.865 "trsvcid": "4420" 00:18:02.865 }, 00:18:02.865 "peer_address": { 00:18:02.865 "trtype": "TCP", 00:18:02.865 "adrfam": "IPv4", 00:18:02.865 "traddr": "10.0.0.1", 00:18:02.865 "trsvcid": "41134" 00:18:02.865 }, 00:18:02.865 "auth": { 00:18:02.865 "state": "completed", 00:18:02.865 "digest": "sha256", 00:18:02.865 "dhgroup": "ffdhe4096" 00:18:02.865 } 00:18:02.865 } 00:18:02.865 ]' 00:18:03.124 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.124 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.124 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.124 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.124 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.125 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.125 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.125 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.388 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:18:04.403 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.403 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:04.403 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.403 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.403 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.403 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.404 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.971 00:18:04.971 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.971 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.971 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.229 { 00:18:05.229 "cntlid": 31, 00:18:05.229 "qid": 0, 00:18:05.229 "state": "enabled", 00:18:05.229 "thread": "nvmf_tgt_poll_group_000", 
00:18:05.229 "listen_address": { 00:18:05.229 "trtype": "TCP", 00:18:05.229 "adrfam": "IPv4", 00:18:05.229 "traddr": "10.0.0.2", 00:18:05.229 "trsvcid": "4420" 00:18:05.229 }, 00:18:05.229 "peer_address": { 00:18:05.229 "trtype": "TCP", 00:18:05.229 "adrfam": "IPv4", 00:18:05.229 "traddr": "10.0.0.1", 00:18:05.229 "trsvcid": "41150" 00:18:05.229 }, 00:18:05.229 "auth": { 00:18:05.229 "state": "completed", 00:18:05.229 "digest": "sha256", 00:18:05.229 "dhgroup": "ffdhe4096" 00:18:05.229 } 00:18:05.229 } 00:18:05.229 ]' 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.229 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.230 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.230 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.230 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.230 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.230 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.230 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.488 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 
00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.425 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.426 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.684 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.252 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.252 12:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.252 { 00:18:07.252 "cntlid": 33, 00:18:07.252 "qid": 0, 00:18:07.252 "state": "enabled", 00:18:07.252 "thread": "nvmf_tgt_poll_group_000", 00:18:07.252 "listen_address": { 00:18:07.252 "trtype": "TCP", 00:18:07.252 "adrfam": "IPv4", 00:18:07.252 "traddr": "10.0.0.2", 00:18:07.252 "trsvcid": "4420" 00:18:07.252 }, 00:18:07.252 "peer_address": { 00:18:07.252 "trtype": "TCP", 00:18:07.252 "adrfam": "IPv4", 00:18:07.252 "traddr": "10.0.0.1", 00:18:07.252 "trsvcid": "60710" 00:18:07.252 }, 00:18:07.252 "auth": { 00:18:07.252 "state": "completed", 00:18:07.252 "digest": "sha256", 00:18:07.252 "dhgroup": "ffdhe6144" 00:18:07.252 } 00:18:07.252 } 00:18:07.252 ]' 00:18:07.252 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.511 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.511 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.511 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.511 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.511 12:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.511 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.511 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.770 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:18:08.709 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.968 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.227 00:18:09.227 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.227 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.227 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.486 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.486 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.486 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.486 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.486 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.487 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.487 { 00:18:09.487 "cntlid": 35, 00:18:09.487 "qid": 0, 00:18:09.487 "state": "enabled", 00:18:09.487 "thread": "nvmf_tgt_poll_group_000", 00:18:09.487 "listen_address": { 00:18:09.487 "trtype": "TCP", 00:18:09.487 "adrfam": "IPv4", 00:18:09.487 "traddr": "10.0.0.2", 00:18:09.487 "trsvcid": "4420" 00:18:09.487 }, 00:18:09.487 "peer_address": { 00:18:09.487 "trtype": "TCP", 00:18:09.487 "adrfam": "IPv4", 00:18:09.487 "traddr": "10.0.0.1", 00:18:09.487 "trsvcid": "60734" 00:18:09.487 
}, 00:18:09.487 "auth": { 00:18:09.487 "state": "completed", 00:18:09.487 "digest": "sha256", 00:18:09.487 "dhgroup": "ffdhe6144" 00:18:09.487 } 00:18:09.487 } 00:18:09.487 ]' 00:18:09.487 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.745 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.004 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.940 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.940 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.199 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.458 00:18:11.458 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.458 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.458 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.026 { 00:18:12.026 "cntlid": 37, 00:18:12.026 "qid": 0, 00:18:12.026 "state": "enabled", 00:18:12.026 "thread": "nvmf_tgt_poll_group_000", 00:18:12.026 "listen_address": { 00:18:12.026 "trtype": "TCP", 00:18:12.026 "adrfam": "IPv4", 00:18:12.026 "traddr": "10.0.0.2", 00:18:12.026 "trsvcid": "4420" 00:18:12.026 }, 00:18:12.026 "peer_address": { 00:18:12.026 "trtype": "TCP", 00:18:12.026 "adrfam": "IPv4", 00:18:12.026 "traddr": "10.0.0.1", 00:18:12.026 "trsvcid": "60754" 00:18:12.026 }, 00:18:12.026 "auth": { 00:18:12.026 "state": "completed", 00:18:12.026 "digest": "sha256", 00:18:12.026 "dhgroup": "ffdhe6144" 00:18:12.026 } 00:18:12.026 } 00:18:12.026 ]' 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.026 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:12.284 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.219 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:13.478 12:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.478 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.738 00:18:13.738 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.738 12:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.738 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.306 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.306 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.306 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.306 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.306 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.306 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.306 { 00:18:14.306 "cntlid": 39, 00:18:14.307 "qid": 0, 00:18:14.307 "state": "enabled", 00:18:14.307 "thread": "nvmf_tgt_poll_group_000", 00:18:14.307 "listen_address": { 00:18:14.307 "trtype": "TCP", 00:18:14.307 "adrfam": "IPv4", 00:18:14.307 "traddr": "10.0.0.2", 00:18:14.307 "trsvcid": "4420" 00:18:14.307 }, 00:18:14.307 "peer_address": { 00:18:14.307 "trtype": "TCP", 00:18:14.307 "adrfam": "IPv4", 00:18:14.307 "traddr": "10.0.0.1", 00:18:14.307 "trsvcid": "60782" 00:18:14.307 }, 00:18:14.307 "auth": { 00:18:14.307 "state": "completed", 00:18:14.307 "digest": "sha256", 00:18:14.307 "dhgroup": "ffdhe6144" 00:18:14.307 } 00:18:14.307 } 00:18:14.307 ]' 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.307 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.565 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.502 12:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.502 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.761 12:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.761 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.329 00:18:16.329 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.329 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.329 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.587 { 00:18:16.587 "cntlid": 41, 00:18:16.587 "qid": 0, 00:18:16.587 "state": "enabled", 00:18:16.587 "thread": 
"nvmf_tgt_poll_group_000", 00:18:16.587 "listen_address": { 00:18:16.587 "trtype": "TCP", 00:18:16.587 "adrfam": "IPv4", 00:18:16.587 "traddr": "10.0.0.2", 00:18:16.587 "trsvcid": "4420" 00:18:16.587 }, 00:18:16.587 "peer_address": { 00:18:16.587 "trtype": "TCP", 00:18:16.587 "adrfam": "IPv4", 00:18:16.587 "traddr": "10.0.0.1", 00:18:16.587 "trsvcid": "60800" 00:18:16.587 }, 00:18:16.587 "auth": { 00:18:16.587 "state": "completed", 00:18:16.587 "digest": "sha256", 00:18:16.587 "dhgroup": "ffdhe8192" 00:18:16.587 } 00:18:16.587 } 00:18:16.587 ]' 00:18:16.587 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.846 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.105 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.041 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.001 00:18:19.001 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.001 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.001 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.271 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.271 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.271 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.271 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.271 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.271 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.271 { 00:18:19.271 "cntlid": 43, 00:18:19.271 "qid": 0, 00:18:19.271 "state": "enabled", 00:18:19.271 "thread": "nvmf_tgt_poll_group_000", 00:18:19.271 "listen_address": { 00:18:19.271 "trtype": "TCP", 00:18:19.271 "adrfam": "IPv4", 00:18:19.271 "traddr": "10.0.0.2", 00:18:19.271 "trsvcid": "4420" 00:18:19.271 }, 00:18:19.271 "peer_address": { 00:18:19.271 "trtype": "TCP", 00:18:19.272 "adrfam": "IPv4", 00:18:19.272 "traddr": "10.0.0.1", 00:18:19.272 "trsvcid": "37896" 00:18:19.272 }, 00:18:19.272 "auth": { 00:18:19.272 "state": "completed", 00:18:19.272 "digest": "sha256", 00:18:19.272 "dhgroup": "ffdhe8192" 00:18:19.272 } 00:18:19.272 } 00:18:19.272 ]' 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.272 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.530 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.467 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.726 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.726 12:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.293 00:18:21.552 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.552 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.552 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.811 { 00:18:21.811 "cntlid": 45, 00:18:21.811 "qid": 0, 00:18:21.811 "state": "enabled", 00:18:21.811 "thread": "nvmf_tgt_poll_group_000", 00:18:21.811 "listen_address": { 00:18:21.811 "trtype": "TCP", 00:18:21.811 "adrfam": "IPv4", 00:18:21.811 "traddr": "10.0.0.2", 00:18:21.811 "trsvcid": "4420" 00:18:21.811 }, 00:18:21.811 "peer_address": { 00:18:21.811 "trtype": "TCP", 00:18:21.811 "adrfam": "IPv4", 00:18:21.811 "traddr": "10.0.0.1", 
00:18:21.811 "trsvcid": "37928" 00:18:21.811 }, 00:18:21.811 "auth": { 00:18:21.811 "state": "completed", 00:18:21.811 "digest": "sha256", 00:18:21.811 "dhgroup": "ffdhe8192" 00:18:21.811 } 00:18:21.811 } 00:18:21.811 ]' 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.811 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.811 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.811 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.811 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.070 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.007 12:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.007 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.943 00:18:23.943 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.943 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.943 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.943 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.943 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.943 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.943 12:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.943 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.943 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.943 { 00:18:23.943 "cntlid": 47, 00:18:23.943 "qid": 0, 00:18:23.943 "state": "enabled", 00:18:23.943 "thread": "nvmf_tgt_poll_group_000", 00:18:23.943 "listen_address": { 00:18:23.943 "trtype": "TCP", 00:18:23.943 "adrfam": "IPv4", 00:18:23.943 "traddr": "10.0.0.2", 00:18:23.943 "trsvcid": "4420" 00:18:23.943 }, 00:18:23.943 "peer_address": { 00:18:23.943 "trtype": "TCP", 00:18:23.943 "adrfam": "IPv4", 00:18:23.943 "traddr": "10.0.0.1", 00:18:23.943 "trsvcid": "37946" 00:18:23.943 }, 00:18:23.943 "auth": { 00:18:23.943 "state": "completed", 00:18:23.943 "digest": "sha256", 00:18:23.943 "dhgroup": "ffdhe8192" 00:18:23.943 } 00:18:23.943 } 00:18:23.943 ]' 00:18:23.943 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.202 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.460 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.396 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.397 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.965 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.965 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.236 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.236 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.236 { 00:18:26.236 "cntlid": 49, 00:18:26.236 "qid": 0, 00:18:26.236 "state": "enabled", 00:18:26.236 "thread": "nvmf_tgt_poll_group_000", 00:18:26.236 "listen_address": { 00:18:26.236 "trtype": "TCP", 00:18:26.236 "adrfam": "IPv4", 00:18:26.236 "traddr": "10.0.0.2", 00:18:26.236 "trsvcid": "4420" 00:18:26.236 }, 00:18:26.236 "peer_address": { 00:18:26.236 "trtype": "TCP", 00:18:26.236 "adrfam": "IPv4", 00:18:26.236 "traddr": "10.0.0.1", 00:18:26.236 "trsvcid": "37982" 00:18:26.236 }, 00:18:26.236 "auth": { 00:18:26.236 "state": "completed", 00:18:26.236 "digest": "sha384", 00:18:26.236 "dhgroup": "null" 00:18:26.236 } 00:18:26.236 } 00:18:26.236 ]' 00:18:26.236 
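The qpairs JSON above is verified field-by-field with jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state == completed`). The same check can be sketched in Python against a sample of the JSON printed in this log (the helper name is illustrative, not part of the SPDK test suite):

```python
import json

def check_auth(qpairs_json, digest, dhgroup):
    # Mirror the jq checks in auth.sh lines 46-48: compare the first qpair's
    # negotiated digest and dhgroup, and require auth state "completed".
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# Trimmed from the qpairs output logged above
sample = '''[{"cntlid": 49, "qid": 0, "state": "enabled",
              "auth": {"state": "completed", "digest": "sha384", "dhgroup": "null"}}]'''
print(check_auth(sample, "sha384", "null"))  # True
```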
12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.236 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.236 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.236 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.237 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.237 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.237 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.237 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.496 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:27.431 
12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.431 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.689 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:27.689 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.689 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.689 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.690 12:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.690 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.948 00:18:27.948 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.948 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.948 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
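The bash expansion `${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}` seen in the traces above appends the controller-key flag only when a ckey exists for that key index, which is why the key3 runs in this log attach with `--dhchap-key key3` alone. A minimal Python sketch of that conditional argument construction (hypothetical helper, mirroring the bash behavior):

```python
def dhchap_args(keyid, ckeys):
    # Always pass the host key; add the controller key only if a ckey is
    # defined for this index (mirrors bash's ${ckeys[$i]:+...} expansion).
    args = ["--dhchap-key", f"key{keyid}"]
    if ckeys.get(keyid):
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

ckeys = {0: "c0", 1: "c1", 2: "c2"}  # illustrative: no ckey for index 3
print(dhchap_args(2, ckeys))  # ['--dhchap-key', 'key2', '--dhchap-ctrlr-key', 'ckey2']
print(dhchap_args(3, ckeys))  # ['--dhchap-key', 'key3']
```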
00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.207 { 00:18:28.207 "cntlid": 51, 00:18:28.207 "qid": 0, 00:18:28.207 "state": "enabled", 00:18:28.207 "thread": "nvmf_tgt_poll_group_000", 00:18:28.207 "listen_address": { 00:18:28.207 "trtype": "TCP", 00:18:28.207 "adrfam": "IPv4", 00:18:28.207 "traddr": "10.0.0.2", 00:18:28.207 "trsvcid": "4420" 00:18:28.207 }, 00:18:28.207 "peer_address": { 00:18:28.207 "trtype": "TCP", 00:18:28.207 "adrfam": "IPv4", 00:18:28.207 "traddr": "10.0.0.1", 00:18:28.207 "trsvcid": "42158" 00:18:28.207 }, 00:18:28.207 "auth": { 00:18:28.207 "state": "completed", 00:18:28.207 "digest": "sha384", 00:18:28.207 "dhgroup": "null" 00:18:28.207 } 00:18:28.207 } 00:18:28.207 ]' 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.207 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.467 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.844 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.844 12:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.844 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.103 00:18:30.103 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.103 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
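The `--dhchap-secret DHHC-1:NN:...:` strings passed to `nvme connect` throughout this log follow the DH-HMAC-CHAP secret representation: a two-digit hash identifier followed by base64 of the raw key with a trailing CRC32. The sketch below generates and parses that layout; the exact format (CRC32 appended little-endian, as produced by nvme-cli's `gen-dhchap-key`) is an assumption here and should be verified against your nvme-cli version:

```python
import base64
import zlib

def gen_dhchap_secret(key: bytes, hmac_id: int = 1) -> str:
    # Assumed layout: DHHC-1:<hh>:<base64(key || crc32_le(key))>:
    blob = key + zlib.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hmac_id:02d}:{base64.b64encode(blob).decode()}:"

def parse_dhchap_secret(secret: str) -> bytes:
    # Split the colon-delimited fields and strip the trailing 4-byte CRC
    _, _hmac_id, b64, _ = secret.split(":")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    assert zlib.crc32(key).to_bytes(4, "little") == crc, "CRC mismatch"
    return key

# Round-trip a 32-byte key (the size implied by the hmac 01 secrets above)
key = bytes(range(32))
assert parse_dhchap_secret(gen_dhchap_secret(key)) == key
```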
00:18:30.103 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:30.361 {
00:18:30.361 "cntlid": 53,
00:18:30.361 "qid": 0,
00:18:30.361 "state": "enabled",
00:18:30.361 "thread": "nvmf_tgt_poll_group_000",
00:18:30.361 "listen_address": {
00:18:30.361 "trtype": "TCP",
00:18:30.361 "adrfam": "IPv4",
00:18:30.361 "traddr": "10.0.0.2",
00:18:30.361 "trsvcid": "4420"
00:18:30.361 },
00:18:30.361 "peer_address": {
00:18:30.361 "trtype": "TCP",
00:18:30.361 "adrfam": "IPv4",
00:18:30.361 "traddr": "10.0.0.1",
00:18:30.361 "trsvcid": "42192"
00:18:30.361 },
00:18:30.361 "auth": {
00:18:30.361 "state": "completed",
00:18:30.361 "digest": "sha384",
00:18:30.361 "dhgroup": "null"
00:18:30.361 }
00:18:30.361 }
00:18:30.361 ]'
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:30.361 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:30.620 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:18:30.620 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:30.620 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:30.620 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:30.620 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:31.190 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72:
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:32.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:32.128 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:32.387
00:18:32.387 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:32.387 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:32.387 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:32.954 {
00:18:32.954 "cntlid": 55,
00:18:32.954 "qid": 0,
00:18:32.954 "state": "enabled",
00:18:32.954 "thread": "nvmf_tgt_poll_group_000",
00:18:32.954 "listen_address": {
00:18:32.954 "trtype": "TCP",
00:18:32.954 "adrfam": "IPv4",
00:18:32.954 "traddr": "10.0.0.2",
00:18:32.954 "trsvcid": "4420"
00:18:32.954 },
00:18:32.954 "peer_address": {
00:18:32.954 "trtype": "TCP",
00:18:32.954 "adrfam": "IPv4",
00:18:32.954 "traddr": "10.0.0.1",
00:18:32.954 "trsvcid": "42206"
00:18:32.954 },
00:18:32.954 "auth": {
00:18:32.954 "state": "completed",
00:18:32.954 "digest": "sha384",
00:18:32.954 "dhgroup": "null"
00:18:32.954 }
00:18:32.954 }
00:18:32.954 ]'
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:32.954 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:33.213 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:18:33.213 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:33.213 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:33.213 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:33.213 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:33.471 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=:
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:34.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:34.417 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:34.725
00:18:34.725 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:34.725 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:34.725 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:34.983 {
00:18:34.983 "cntlid": 57,
00:18:34.983 "qid": 0,
00:18:34.983 "state": "enabled",
00:18:34.983 "thread": "nvmf_tgt_poll_group_000",
00:18:34.983 "listen_address": {
00:18:34.983 "trtype": "TCP",
00:18:34.983 "adrfam": "IPv4",
00:18:34.983 "traddr": "10.0.0.2",
00:18:34.983 "trsvcid": "4420"
00:18:34.983 },
00:18:34.983 "peer_address": {
00:18:34.983 "trtype": "TCP",
00:18:34.983 "adrfam": "IPv4",
00:18:34.983 "traddr": "10.0.0.1",
00:18:34.983 "trsvcid": "42228"
00:18:34.983 },
00:18:34.983 "auth": {
00:18:34.983 "state": "completed",
00:18:34.983 "digest": "sha384",
00:18:34.983 "dhgroup": "ffdhe2048"
00:18:34.983 }
00:18:34.983 }
00:18:34.983 ]'
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:34.983 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:35.242 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:35.242 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:35.242 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:35.242 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:35.242 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:35.501 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=:
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:36.434 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.435 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:37.002
00:18:37.002 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:37.002 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:37.002 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:37.283 {
00:18:37.283 "cntlid": 59,
00:18:37.283 "qid": 0,
00:18:37.283 "state": "enabled",
00:18:37.283 "thread": "nvmf_tgt_poll_group_000",
00:18:37.283 "listen_address": {
00:18:37.283 "trtype": "TCP",
00:18:37.283 "adrfam": "IPv4",
00:18:37.283 "traddr": "10.0.0.2",
00:18:37.283 "trsvcid": "4420"
00:18:37.283 },
00:18:37.283 "peer_address": {
00:18:37.283 "trtype": "TCP",
00:18:37.283 "adrfam": "IPv4",
00:18:37.283 "traddr": "10.0.0.1",
00:18:37.283 "trsvcid": "33894"
00:18:37.283 },
00:18:37.283 "auth": {
00:18:37.283 "state": "completed",
00:18:37.283 "digest": "sha384",
00:18:37.283 "dhgroup": "ffdhe2048"
00:18:37.283 }
00:18:37.283 }
00:18:37.283 ]'
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.283 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.542 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==:
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:38.480 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.739 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.997
00:18:38.997 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:38.997 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.997 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:39.255 {
00:18:39.255 "cntlid": 61,
00:18:39.255 "qid": 0,
00:18:39.255 "state": "enabled",
00:18:39.255 "thread": "nvmf_tgt_poll_group_000",
00:18:39.255 "listen_address": {
00:18:39.255 "trtype": "TCP",
00:18:39.255 "adrfam": "IPv4",
00:18:39.255 "traddr": "10.0.0.2",
00:18:39.255 "trsvcid": "4420"
00:18:39.255 },
00:18:39.255 "peer_address": {
00:18:39.255 "trtype": "TCP",
00:18:39.255 "adrfam": "IPv4",
00:18:39.255 "traddr": "10.0.0.1",
00:18:39.255 "trsvcid": "33930"
00:18:39.255 },
00:18:39.255 "auth": {
00:18:39.255 "state": "completed",
00:18:39.255 "digest": "sha384",
00:18:39.255 "dhgroup": "ffdhe2048"
00:18:39.255 }
00:18:39.255 }
00:18:39.255 ]'
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:39.255 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:39.513 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.513 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.513 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.771 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72:
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:40.708 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:40.966
00:18:40.966 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:40.966 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:40.966 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:41.533 {
00:18:41.533 "cntlid": 63,
00:18:41.533 "qid": 0,
00:18:41.533 "state": "enabled",
00:18:41.533 "thread": "nvmf_tgt_poll_group_000",
00:18:41.533 "listen_address": {
00:18:41.533 "trtype": "TCP",
00:18:41.533 "adrfam": "IPv4",
00:18:41.533 "traddr": "10.0.0.2",
00:18:41.533 "trsvcid": "4420"
00:18:41.533 },
00:18:41.533 "peer_address": {
00:18:41.533 "trtype": "TCP",
00:18:41.533 "adrfam": "IPv4",
00:18:41.533 "traddr": "10.0.0.1",
00:18:41.533 "trsvcid": "33958"
00:18:41.533 },
00:18:41.533 "auth": {
00:18:41.533 "state": "completed",
00:18:41.533 "digest": "sha384",
00:18:41.533 "dhgroup": "ffdhe2048"
00:18:41.533 }
00:18:41.533 }
00:18:41.533 ]'
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:41.533 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:41.790 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:41.790 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:41.790 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:41.790 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:41.790 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:42.048 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=:
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:42.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:42.616 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:42.875 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.441
00:18:43.441 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:43.441 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:43.441 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:43.700 {
00:18:43.700 "cntlid": 65,
00:18:43.700 "qid": 0,
00:18:43.700 "state": "enabled",
00:18:43.700 "thread": "nvmf_tgt_poll_group_000",
00:18:43.700 "listen_address": {
00:18:43.700 "trtype": "TCP",
00:18:43.700 "adrfam": "IPv4",
00:18:43.700 "traddr": "10.0.0.2",
00:18:43.700 "trsvcid": "4420"
00:18:43.700 },
00:18:43.700 "peer_address": {
00:18:43.700 "trtype": "TCP",
00:18:43.700 "adrfam": "IPv4",
00:18:43.700 "traddr": "10.0.0.1",
00:18:43.700 "trsvcid": "33990" 00:18:43.700 }, 00:18:43.700 "auth": { 00:18:43.700 "state": "completed", 00:18:43.700 "digest": "sha384", 00:18:43.700 "dhgroup": "ffdhe3072" 00:18:43.700 } 00:18:43.700 } 00:18:43.700 ]' 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.700 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.958 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.894 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.152 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.410 00:18:45.410 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.410 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.410 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.669 { 00:18:45.669 "cntlid": 67, 00:18:45.669 "qid": 0, 00:18:45.669 "state": "enabled", 00:18:45.669 "thread": "nvmf_tgt_poll_group_000", 00:18:45.669 "listen_address": { 00:18:45.669 "trtype": "TCP", 00:18:45.669 "adrfam": "IPv4", 00:18:45.669 "traddr": "10.0.0.2", 00:18:45.669 "trsvcid": "4420" 00:18:45.669 }, 00:18:45.669 "peer_address": { 00:18:45.669 "trtype": "TCP", 00:18:45.669 "adrfam": "IPv4", 00:18:45.669 "traddr": "10.0.0.1", 00:18:45.669 "trsvcid": "34026" 00:18:45.669 }, 00:18:45.669 "auth": { 00:18:45.669 "state": "completed", 00:18:45.669 "digest": "sha384", 00:18:45.669 "dhgroup": "ffdhe3072" 00:18:45.669 } 00:18:45.669 } 00:18:45.669 ]' 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.669 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.927 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.927 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.927 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.927 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.865 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.123 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.382 00:18:47.382 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.382 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.382 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.640 { 00:18:47.640 "cntlid": 69, 00:18:47.640 "qid": 0, 00:18:47.640 "state": "enabled", 00:18:47.640 "thread": "nvmf_tgt_poll_group_000", 00:18:47.640 "listen_address": { 00:18:47.640 "trtype": "TCP", 00:18:47.640 "adrfam": "IPv4", 00:18:47.640 "traddr": "10.0.0.2", 00:18:47.640 "trsvcid": "4420" 00:18:47.640 }, 00:18:47.640 "peer_address": { 00:18:47.640 "trtype": "TCP", 00:18:47.640 "adrfam": "IPv4", 00:18:47.640 "traddr": "10.0.0.1", 00:18:47.640 "trsvcid": "36604" 00:18:47.640 }, 00:18:47.640 "auth": { 00:18:47.640 "state": "completed", 00:18:47.640 "digest": "sha384", 00:18:47.640 "dhgroup": "ffdhe3072" 00:18:47.640 } 00:18:47.640 } 00:18:47.640 ]' 00:18:47.640 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.898 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.898 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.898 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.898 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.898 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.898 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.898 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.157 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:18:49.095 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.095 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:49.095 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.095 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.096 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.731 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.731 { 00:18:49.731 "cntlid": 71, 00:18:49.731 "qid": 0, 00:18:49.731 "state": "enabled", 00:18:49.731 "thread": "nvmf_tgt_poll_group_000", 
00:18:49.731 "listen_address": { 00:18:49.731 "trtype": "TCP", 00:18:49.731 "adrfam": "IPv4", 00:18:49.731 "traddr": "10.0.0.2", 00:18:49.731 "trsvcid": "4420" 00:18:49.731 }, 00:18:49.731 "peer_address": { 00:18:49.731 "trtype": "TCP", 00:18:49.731 "adrfam": "IPv4", 00:18:49.731 "traddr": "10.0.0.1", 00:18:49.731 "trsvcid": "36644" 00:18:49.731 }, 00:18:49.731 "auth": { 00:18:49.731 "state": "completed", 00:18:49.731 "digest": "sha384", 00:18:49.731 "dhgroup": "ffdhe3072" 00:18:49.731 } 00:18:49.731 } 00:18:49.731 ]' 00:18:49.731 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.731 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.731 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.989 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.989 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.989 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.989 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.989 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.247 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 
00:18:51.180 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.181 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.439 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.439 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.439 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.697 00:18:51.697 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.697 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.697 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.955 12:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.955 { 00:18:51.955 "cntlid": 73, 00:18:51.955 "qid": 0, 00:18:51.955 "state": "enabled", 00:18:51.955 "thread": "nvmf_tgt_poll_group_000", 00:18:51.955 "listen_address": { 00:18:51.955 "trtype": "TCP", 00:18:51.955 "adrfam": "IPv4", 00:18:51.955 "traddr": "10.0.0.2", 00:18:51.955 "trsvcid": "4420" 00:18:51.955 }, 00:18:51.955 "peer_address": { 00:18:51.955 "trtype": "TCP", 00:18:51.955 "adrfam": "IPv4", 00:18:51.955 "traddr": "10.0.0.1", 00:18:51.955 "trsvcid": "36674" 00:18:51.955 }, 00:18:51.955 "auth": { 00:18:51.955 "state": "completed", 00:18:51.955 "digest": "sha384", 00:18:51.955 "dhgroup": "ffdhe4096" 00:18:51.955 } 00:18:51.955 } 00:18:51.955 ]' 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.955 12:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.955 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.213 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.589 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.848 00:18:53.848 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.848 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.848 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.105 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.105 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.105 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.105 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.105 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.105 { 00:18:54.105 "cntlid": 75, 00:18:54.105 "qid": 0, 00:18:54.105 "state": "enabled", 00:18:54.105 "thread": "nvmf_tgt_poll_group_000", 00:18:54.105 "listen_address": { 00:18:54.105 "trtype": "TCP", 00:18:54.105 "adrfam": "IPv4", 00:18:54.105 "traddr": "10.0.0.2", 00:18:54.105 "trsvcid": "4420" 00:18:54.105 }, 00:18:54.105 "peer_address": { 00:18:54.105 "trtype": "TCP", 00:18:54.105 "adrfam": "IPv4", 00:18:54.105 "traddr": "10.0.0.1", 00:18:54.105 "trsvcid": "36704" 00:18:54.105 
}, 00:18:54.105 "auth": { 00:18:54.106 "state": "completed", 00:18:54.106 "digest": "sha384", 00:18:54.106 "dhgroup": "ffdhe4096" 00:18:54.106 } 00:18:54.106 } 00:18:54.106 ]' 00:18:54.106 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.364 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.622 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:55.558 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.559 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.126 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.126 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.383 { 00:18:56.383 "cntlid": 77, 00:18:56.383 "qid": 0, 00:18:56.383 "state": "enabled", 00:18:56.383 "thread": "nvmf_tgt_poll_group_000", 00:18:56.383 "listen_address": { 00:18:56.383 "trtype": "TCP", 00:18:56.383 "adrfam": "IPv4", 00:18:56.383 "traddr": "10.0.0.2", 00:18:56.383 "trsvcid": "4420" 00:18:56.383 }, 00:18:56.383 "peer_address": { 00:18:56.383 "trtype": "TCP", 00:18:56.383 "adrfam": "IPv4", 00:18:56.383 "traddr": "10.0.0.1", 00:18:56.383 "trsvcid": "36738" 00:18:56.383 }, 00:18:56.383 "auth": { 00:18:56.383 "state": "completed", 00:18:56.383 "digest": "sha384", 00:18:56.383 "dhgroup": "ffdhe4096" 00:18:56.383 } 00:18:56.383 } 00:18:56.383 ]' 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.383 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:56.640 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:57.575 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:57.834 12:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.834 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.400 00:18:58.400 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.400 12:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.400 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.658 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.658 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.658 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.659 { 00:18:58.659 "cntlid": 79, 00:18:58.659 "qid": 0, 00:18:58.659 "state": "enabled", 00:18:58.659 "thread": "nvmf_tgt_poll_group_000", 00:18:58.659 "listen_address": { 00:18:58.659 "trtype": "TCP", 00:18:58.659 "adrfam": "IPv4", 00:18:58.659 "traddr": "10.0.0.2", 00:18:58.659 "trsvcid": "4420" 00:18:58.659 }, 00:18:58.659 "peer_address": { 00:18:58.659 "trtype": "TCP", 00:18:58.659 "adrfam": "IPv4", 00:18:58.659 "traddr": "10.0.0.1", 00:18:58.659 "trsvcid": "60292" 00:18:58.659 }, 00:18:58.659 "auth": { 00:18:58.659 "state": "completed", 00:18:58.659 "digest": "sha384", 00:18:58.659 "dhgroup": "ffdhe4096" 00:18:58.659 } 00:18:58.659 } 00:18:58.659 ]' 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.659 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.917 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:18:59.855 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.855 12:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.855 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.114 12:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.114 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.681 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.681 { 00:19:00.681 "cntlid": 81, 00:19:00.681 "qid": 0, 00:19:00.681 "state": "enabled", 00:19:00.681 "thread": 
"nvmf_tgt_poll_group_000", 00:19:00.681 "listen_address": { 00:19:00.681 "trtype": "TCP", 00:19:00.681 "adrfam": "IPv4", 00:19:00.681 "traddr": "10.0.0.2", 00:19:00.681 "trsvcid": "4420" 00:19:00.681 }, 00:19:00.681 "peer_address": { 00:19:00.681 "trtype": "TCP", 00:19:00.681 "adrfam": "IPv4", 00:19:00.681 "traddr": "10.0.0.1", 00:19:00.681 "trsvcid": "60308" 00:19:00.681 }, 00:19:00.681 "auth": { 00:19:00.681 "state": "completed", 00:19:00.681 "digest": "sha384", 00:19:00.681 "dhgroup": "ffdhe6144" 00:19:00.681 } 00:19:00.681 } 00:19:00.681 ]' 00:19:00.681 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.940 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.199 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.136 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.704 00:19:02.704 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.704 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.704 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.962 { 00:19:02.962 "cntlid": 83, 00:19:02.962 "qid": 0, 00:19:02.962 "state": "enabled", 00:19:02.962 "thread": "nvmf_tgt_poll_group_000", 00:19:02.962 "listen_address": { 00:19:02.962 "trtype": "TCP", 00:19:02.962 "adrfam": "IPv4", 00:19:02.962 "traddr": "10.0.0.2", 00:19:02.962 "trsvcid": "4420" 00:19:02.962 }, 00:19:02.962 "peer_address": { 00:19:02.962 "trtype": "TCP", 00:19:02.962 "adrfam": "IPv4", 00:19:02.962 "traddr": "10.0.0.1", 00:19:02.962 "trsvcid": "60326" 00:19:02.962 }, 00:19:02.962 "auth": { 00:19:02.962 "state": "completed", 00:19:02.962 "digest": "sha384", 00:19:02.962 "dhgroup": "ffdhe6144" 00:19:02.962 } 00:19:02.962 } 00:19:02.962 ]' 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.962 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.220 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.220 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.220 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.220 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.220 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.220 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.478 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.414 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.672 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.673 12:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.274 00:19:05.275 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.275 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.275 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.533 { 00:19:05.533 "cntlid": 85, 00:19:05.533 "qid": 0, 00:19:05.533 "state": "enabled", 00:19:05.533 "thread": "nvmf_tgt_poll_group_000", 00:19:05.533 "listen_address": { 00:19:05.533 "trtype": "TCP", 00:19:05.533 "adrfam": "IPv4", 00:19:05.533 "traddr": "10.0.0.2", 00:19:05.533 "trsvcid": "4420" 00:19:05.533 }, 00:19:05.533 "peer_address": { 00:19:05.533 "trtype": "TCP", 00:19:05.533 "adrfam": "IPv4", 00:19:05.533 "traddr": "10.0.0.1", 
00:19:05.533 "trsvcid": "60350" 00:19:05.533 }, 00:19:05.533 "auth": { 00:19:05.533 "state": "completed", 00:19:05.533 "digest": "sha384", 00:19:05.533 "dhgroup": "ffdhe6144" 00:19:05.533 } 00:19:05.533 } 00:19:05.533 ]' 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.533 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.792 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.167 12:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.167 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.734 00:19:07.734 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.734 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.734 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.992 12:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.992 { 00:19:07.992 "cntlid": 87, 00:19:07.992 "qid": 0, 00:19:07.992 "state": "enabled", 00:19:07.992 "thread": "nvmf_tgt_poll_group_000", 00:19:07.992 "listen_address": { 00:19:07.992 "trtype": "TCP", 00:19:07.992 "adrfam": "IPv4", 00:19:07.992 "traddr": "10.0.0.2", 00:19:07.992 "trsvcid": "4420" 00:19:07.992 }, 00:19:07.992 "peer_address": { 00:19:07.992 "trtype": "TCP", 00:19:07.992 "adrfam": "IPv4", 00:19:07.992 "traddr": "10.0.0.1", 00:19:07.992 "trsvcid": "60582" 00:19:07.992 }, 00:19:07.992 "auth": { 00:19:07.992 "state": "completed", 00:19:07.992 "digest": "sha384", 00:19:07.992 "dhgroup": "ffdhe6144" 00:19:07.992 } 00:19:07.992 } 00:19:07.992 ]' 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.251 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:19:09.185 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.185 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.186 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.444 12:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.444 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:10.011 00:19:10.011 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.011 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.011 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.268 { 00:19:10.268 "cntlid": 89, 00:19:10.268 "qid": 0, 00:19:10.268 "state": "enabled", 00:19:10.268 "thread": "nvmf_tgt_poll_group_000", 00:19:10.268 "listen_address": { 00:19:10.268 "trtype": "TCP", 00:19:10.268 "adrfam": "IPv4", 00:19:10.268 "traddr": "10.0.0.2", 00:19:10.268 "trsvcid": "4420" 00:19:10.268 }, 00:19:10.268 "peer_address": { 00:19:10.268 "trtype": "TCP", 00:19:10.268 "adrfam": "IPv4", 00:19:10.268 "traddr": "10.0.0.1", 00:19:10.268 "trsvcid": "60612" 00:19:10.268 }, 00:19:10.268 "auth": { 00:19:10.268 "state": "completed", 00:19:10.268 "digest": "sha384", 00:19:10.268 "dhgroup": "ffdhe8192" 00:19:10.268 } 00:19:10.268 } 00:19:10.268 ]' 00:19:10.268 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.526 
12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.526 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.526 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.526 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.526 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.526 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.526 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.784 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:11.718 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.718 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:11.718 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:11.718 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.719 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.719 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.719 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.719 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.976 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:11.976 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.976 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.976 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:11.976 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.977 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.543 00:19:12.543 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.543 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.543 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.801 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.801 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.801 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.801 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.801 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.801 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:12.801 { 00:19:12.801 "cntlid": 91, 00:19:12.801 "qid": 0, 00:19:12.801 "state": "enabled", 00:19:12.801 "thread": "nvmf_tgt_poll_group_000", 00:19:12.801 "listen_address": { 00:19:12.801 "trtype": "TCP", 00:19:12.801 "adrfam": "IPv4", 00:19:12.801 "traddr": "10.0.0.2", 00:19:12.801 "trsvcid": "4420" 00:19:12.801 }, 00:19:12.801 "peer_address": { 00:19:12.801 "trtype": "TCP", 00:19:12.801 "adrfam": "IPv4", 00:19:12.801 "traddr": "10.0.0.1", 00:19:12.801 "trsvcid": "60646" 00:19:12.801 }, 00:19:12.801 "auth": { 00:19:12.801 "state": "completed", 00:19:12.801 "digest": "sha384", 00:19:12.802 "dhgroup": "ffdhe8192" 00:19:12.802 } 00:19:12.802 } 00:19:12.802 ]' 00:19:12.802 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.060 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.318 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.252 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.188 00:19:15.188 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.188 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.188 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.446 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.446 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.446 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.446 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.446 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.446 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.446 { 00:19:15.446 "cntlid": 93, 00:19:15.446 "qid": 0, 00:19:15.446 "state": "enabled", 00:19:15.446 "thread": "nvmf_tgt_poll_group_000", 00:19:15.446 "listen_address": { 00:19:15.446 "trtype": "TCP", 00:19:15.446 "adrfam": "IPv4", 00:19:15.446 "traddr": "10.0.0.2", 00:19:15.446 "trsvcid": "4420" 00:19:15.446 }, 00:19:15.446 "peer_address": { 00:19:15.446 "trtype": "TCP", 00:19:15.446 "adrfam": "IPv4", 00:19:15.446 "traddr": "10.0.0.1", 00:19:15.446 "trsvcid": "60666" 00:19:15.446 }, 00:19:15.446 "auth": { 00:19:15.446 "state": "completed", 00:19:15.446 "digest": "sha384", 00:19:15.447 "dhgroup": "ffdhe8192" 00:19:15.447 } 00:19:15.447 } 00:19:15.447 ]' 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.447 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.705 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.643 12:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:19:16.902 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.469 00:19:17.469 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.469 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.469 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.727 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.727 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.727 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.727 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.727 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.727 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.727 { 00:19:17.727 "cntlid": 95, 00:19:17.727 "qid": 0, 00:19:17.727 "state": "enabled", 00:19:17.727 "thread": "nvmf_tgt_poll_group_000", 00:19:17.727 "listen_address": { 00:19:17.727 "trtype": "TCP", 00:19:17.727 "adrfam": "IPv4", 00:19:17.727 "traddr": "10.0.0.2", 00:19:17.727 "trsvcid": "4420" 00:19:17.727 }, 00:19:17.727 "peer_address": { 00:19:17.727 "trtype": "TCP", 00:19:17.727 "adrfam": "IPv4", 00:19:17.727 "traddr": "10.0.0.1", 
00:19:17.727 "trsvcid": "43380" 00:19:17.727 }, 00:19:17.727 "auth": { 00:19:17.727 "state": "completed", 00:19:17.727 "digest": "sha384", 00:19:17.727 "dhgroup": "ffdhe8192" 00:19:17.727 } 00:19:17.727 } 00:19:17.727 ]' 00:19:17.727 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.986 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.244 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.618 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.876 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:19.876 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.877 12:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.877 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.135 00:19:20.135 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.135 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.135 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.761 { 00:19:20.761 "cntlid": 97, 00:19:20.761 "qid": 0, 00:19:20.761 "state": "enabled", 00:19:20.761 "thread": "nvmf_tgt_poll_group_000", 00:19:20.761 "listen_address": { 00:19:20.761 "trtype": "TCP", 00:19:20.761 "adrfam": "IPv4", 00:19:20.761 "traddr": "10.0.0.2", 00:19:20.761 "trsvcid": "4420" 00:19:20.761 }, 00:19:20.761 "peer_address": { 00:19:20.761 "trtype": "TCP", 00:19:20.761 "adrfam": "IPv4", 00:19:20.761 "traddr": "10.0.0.1", 00:19:20.761 "trsvcid": "43406" 00:19:20.761 }, 00:19:20.761 "auth": { 00:19:20.761 "state": "completed", 00:19:20.761 "digest": "sha512", 00:19:20.761 "dhgroup": "null" 00:19:20.761 } 00:19:20.761 } 00:19:20.761 ]' 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:20.761 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.328 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:22.263 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.264 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.522 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.087 00:19:23.087 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.087 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.087 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.346 { 00:19:23.346 "cntlid": 99, 00:19:23.346 "qid": 0, 00:19:23.346 "state": "enabled", 00:19:23.346 "thread": "nvmf_tgt_poll_group_000", 00:19:23.346 "listen_address": { 00:19:23.346 "trtype": "TCP", 00:19:23.346 "adrfam": "IPv4", 00:19:23.346 "traddr": "10.0.0.2", 00:19:23.346 "trsvcid": "4420" 00:19:23.346 }, 00:19:23.346 "peer_address": { 00:19:23.346 "trtype": "TCP", 00:19:23.346 "adrfam": "IPv4", 00:19:23.346 "traddr": "10.0.0.1", 00:19:23.346 "trsvcid": "43436" 00:19:23.346 }, 00:19:23.346 "auth": { 00:19:23.346 "state": "completed", 00:19:23.346 "digest": "sha512", 00:19:23.346 "dhgroup": "null" 00:19:23.346 } 00:19:23.346 } 00:19:23.346 ]' 00:19:23.346 
12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.346 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.604 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.604 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.604 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.604 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.604 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.170 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:25.106 12:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.107 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.107 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.107 12:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.107 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.107 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.107 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.365 00:19:25.624 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.624 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.624 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.883 { 00:19:25.883 "cntlid": 101, 00:19:25.883 "qid": 0, 00:19:25.883 "state": "enabled", 00:19:25.883 "thread": "nvmf_tgt_poll_group_000", 00:19:25.883 "listen_address": { 00:19:25.883 "trtype": "TCP", 00:19:25.883 "adrfam": "IPv4", 00:19:25.883 "traddr": "10.0.0.2", 00:19:25.883 "trsvcid": "4420" 00:19:25.883 }, 00:19:25.883 "peer_address": { 00:19:25.883 "trtype": "TCP", 00:19:25.883 "adrfam": "IPv4", 00:19:25.883 "traddr": "10.0.0.1", 00:19:25.883 "trsvcid": "43470" 00:19:25.883 }, 00:19:25.883 "auth": { 00:19:25.883 "state": "completed", 00:19:25.883 "digest": "sha512", 00:19:25.883 "dhgroup": "null" 00:19:25.883 } 00:19:25.883 } 00:19:25.883 ]' 00:19:25.883 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.883 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.141 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.076 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.334 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:27.334 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.334 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.334 12:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:27.334 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.334 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.335 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:27.335 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.335 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.335 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.335 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.593 00:19:27.593 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.593 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.593 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.158 { 00:19:28.158 "cntlid": 103, 00:19:28.158 "qid": 0, 00:19:28.158 "state": "enabled", 00:19:28.158 "thread": "nvmf_tgt_poll_group_000", 00:19:28.158 "listen_address": { 00:19:28.158 "trtype": "TCP", 00:19:28.158 "adrfam": "IPv4", 00:19:28.158 "traddr": "10.0.0.2", 00:19:28.158 "trsvcid": "4420" 00:19:28.158 }, 00:19:28.158 "peer_address": { 00:19:28.158 "trtype": "TCP", 00:19:28.158 "adrfam": "IPv4", 00:19:28.158 "traddr": "10.0.0.1", 00:19:28.158 "trsvcid": "42622" 00:19:28.158 }, 00:19:28.158 "auth": { 00:19:28.158 "state": "completed", 00:19:28.158 "digest": "sha512", 00:19:28.158 "dhgroup": "null" 00:19:28.158 } 00:19:28.158 } 00:19:28.158 ]' 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.158 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.725 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.660 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.918 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.918 12:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.485 00:19:30.485 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.485 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.485 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.744 { 00:19:30.744 "cntlid": 105, 00:19:30.744 "qid": 0, 00:19:30.744 "state": "enabled", 00:19:30.744 "thread": "nvmf_tgt_poll_group_000", 00:19:30.744 "listen_address": { 00:19:30.744 "trtype": "TCP", 00:19:30.744 "adrfam": "IPv4", 00:19:30.744 "traddr": "10.0.0.2", 00:19:30.744 "trsvcid": "4420" 00:19:30.744 }, 00:19:30.744 "peer_address": { 00:19:30.744 "trtype": "TCP", 00:19:30.744 "adrfam": "IPv4", 00:19:30.744 "traddr": "10.0.0.1", 
00:19:30.744 "trsvcid": "42654" 00:19:30.744 }, 00:19:30.744 "auth": { 00:19:30.744 "state": "completed", 00:19:30.744 "digest": "sha512", 00:19:30.744 "dhgroup": "ffdhe2048" 00:19:30.744 } 00:19:30.744 } 00:19:30.744 ]' 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.744 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.744 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.744 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.003 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.003 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.003 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.571 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:32.146 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:32.146 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:32.146 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.146 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.147 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.147 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.147 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.147 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.724 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.983 00:19:32.983 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.983 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.983 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.242 { 00:19:33.242 "cntlid": 107, 00:19:33.242 "qid": 0, 00:19:33.242 "state": "enabled", 00:19:33.242 "thread": "nvmf_tgt_poll_group_000", 00:19:33.242 "listen_address": { 00:19:33.242 "trtype": "TCP", 00:19:33.242 "adrfam": "IPv4", 00:19:33.242 "traddr": "10.0.0.2", 00:19:33.242 "trsvcid": "4420" 00:19:33.242 }, 00:19:33.242 "peer_address": { 00:19:33.242 "trtype": "TCP", 00:19:33.242 "adrfam": "IPv4", 00:19:33.242 "traddr": "10.0.0.1", 00:19:33.242 "trsvcid": "42680" 00:19:33.242 }, 00:19:33.242 "auth": { 00:19:33.242 "state": "completed", 00:19:33.242 "digest": "sha512", 00:19:33.242 "dhgroup": "ffdhe2048" 00:19:33.242 } 00:19:33.242 } 00:19:33.242 ]' 00:19:33.242 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.500 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.758 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.693 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.694 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.260 00:19:35.260 12:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.260 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.260 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.519 { 00:19:35.519 "cntlid": 109, 00:19:35.519 "qid": 0, 00:19:35.519 "state": "enabled", 00:19:35.519 "thread": "nvmf_tgt_poll_group_000", 00:19:35.519 "listen_address": { 00:19:35.519 "trtype": "TCP", 00:19:35.519 "adrfam": "IPv4", 00:19:35.519 "traddr": "10.0.0.2", 00:19:35.519 "trsvcid": "4420" 00:19:35.519 }, 00:19:35.519 "peer_address": { 00:19:35.519 "trtype": "TCP", 00:19:35.519 "adrfam": "IPv4", 00:19:35.519 "traddr": "10.0.0.1", 00:19:35.519 "trsvcid": "42722" 00:19:35.519 }, 00:19:35.519 "auth": { 00:19:35.519 "state": "completed", 00:19:35.519 "digest": "sha512", 00:19:35.519 "dhgroup": "ffdhe2048" 00:19:35.519 } 00:19:35.519 } 00:19:35.519 ]' 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.519 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.778 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.715 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.974 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.232 00:19:37.232 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.232 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.232 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.490 { 00:19:37.490 "cntlid": 111, 00:19:37.490 "qid": 0, 00:19:37.490 "state": "enabled", 00:19:37.490 "thread": "nvmf_tgt_poll_group_000", 
00:19:37.490 "listen_address": { 00:19:37.490 "trtype": "TCP", 00:19:37.490 "adrfam": "IPv4", 00:19:37.490 "traddr": "10.0.0.2", 00:19:37.490 "trsvcid": "4420" 00:19:37.490 }, 00:19:37.490 "peer_address": { 00:19:37.490 "trtype": "TCP", 00:19:37.490 "adrfam": "IPv4", 00:19:37.490 "traddr": "10.0.0.1", 00:19:37.490 "trsvcid": "41732" 00:19:37.490 }, 00:19:37.490 "auth": { 00:19:37.490 "state": "completed", 00:19:37.490 "digest": "sha512", 00:19:37.490 "dhgroup": "ffdhe2048" 00:19:37.490 } 00:19:37.490 } 00:19:37.490 ]' 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.490 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.749 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.749 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.749 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.007 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 
00:19:38.573 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.831 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.090 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.348 00:19:39.348 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.348 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.348 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.607 12:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.607 { 00:19:39.607 "cntlid": 113, 00:19:39.607 "qid": 0, 00:19:39.607 "state": "enabled", 00:19:39.607 "thread": "nvmf_tgt_poll_group_000", 00:19:39.607 "listen_address": { 00:19:39.607 "trtype": "TCP", 00:19:39.607 "adrfam": "IPv4", 00:19:39.607 "traddr": "10.0.0.2", 00:19:39.607 "trsvcid": "4420" 00:19:39.607 }, 00:19:39.607 "peer_address": { 00:19:39.607 "trtype": "TCP", 00:19:39.607 "adrfam": "IPv4", 00:19:39.607 "traddr": "10.0.0.1", 00:19:39.607 "trsvcid": "41768" 00:19:39.607 }, 00:19:39.607 "auth": { 00:19:39.607 "state": "completed", 00:19:39.607 "digest": "sha512", 00:19:39.607 "dhgroup": "ffdhe3072" 00:19:39.607 } 00:19:39.607 } 00:19:39.607 ]' 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.607 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.865 12:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.865 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.865 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.124 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:40.690 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.690 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:40.690 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.690 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.949 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.949 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.949 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:40.949 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.949 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.208 00:19:41.466 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.466 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.466 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.725 { 00:19:41.725 "cntlid": 115, 00:19:41.725 "qid": 0, 00:19:41.725 "state": "enabled", 00:19:41.725 "thread": "nvmf_tgt_poll_group_000", 00:19:41.725 "listen_address": { 00:19:41.725 "trtype": "TCP", 00:19:41.725 "adrfam": "IPv4", 00:19:41.725 "traddr": "10.0.0.2", 00:19:41.725 "trsvcid": "4420" 00:19:41.725 }, 00:19:41.725 "peer_address": { 00:19:41.725 "trtype": "TCP", 00:19:41.725 "adrfam": "IPv4", 00:19:41.725 "traddr": "10.0.0.1", 00:19:41.725 "trsvcid": "41788" 00:19:41.725 
}, 00:19:41.725 "auth": { 00:19:41.725 "state": "completed", 00:19:41.725 "digest": "sha512", 00:19:41.725 "dhgroup": "ffdhe3072" 00:19:41.725 } 00:19:41.725 } 00:19:41.725 ]' 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.725 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.983 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.918 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.919 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.919 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.177 00:19:43.177 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.177 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.177 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.436 { 00:19:43.436 "cntlid": 117, 00:19:43.436 "qid": 0, 00:19:43.436 "state": "enabled", 00:19:43.436 "thread": "nvmf_tgt_poll_group_000", 00:19:43.436 "listen_address": { 00:19:43.436 "trtype": "TCP", 00:19:43.436 "adrfam": "IPv4", 00:19:43.436 "traddr": "10.0.0.2", 00:19:43.436 "trsvcid": "4420" 00:19:43.436 }, 00:19:43.436 "peer_address": { 00:19:43.436 "trtype": "TCP", 00:19:43.436 "adrfam": "IPv4", 00:19:43.436 "traddr": "10.0.0.1", 00:19:43.436 "trsvcid": "41812" 00:19:43.436 }, 00:19:43.436 "auth": { 00:19:43.436 "state": "completed", 00:19:43.436 "digest": "sha512", 00:19:43.436 "dhgroup": "ffdhe3072" 00:19:43.436 } 00:19:43.436 } 00:19:43.436 ]' 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.436 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.694 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.694 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.694 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.694 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.694 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:43.953 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.924 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:44.924 12:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.924 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.925 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.925 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.925 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.493 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.493 12:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.493 { 00:19:45.493 "cntlid": 119, 00:19:45.493 "qid": 0, 00:19:45.493 "state": "enabled", 00:19:45.493 "thread": "nvmf_tgt_poll_group_000", 00:19:45.493 "listen_address": { 00:19:45.493 "trtype": "TCP", 00:19:45.493 "adrfam": "IPv4", 00:19:45.493 "traddr": "10.0.0.2", 00:19:45.493 "trsvcid": "4420" 00:19:45.493 }, 00:19:45.493 "peer_address": { 00:19:45.493 "trtype": "TCP", 00:19:45.493 "adrfam": "IPv4", 00:19:45.493 "traddr": "10.0.0.1", 00:19:45.493 "trsvcid": "41850" 00:19:45.493 }, 00:19:45.493 "auth": { 00:19:45.493 "state": "completed", 00:19:45.493 "digest": "sha512", 00:19:45.493 "dhgroup": "ffdhe3072" 00:19:45.493 } 00:19:45.493 } 00:19:45.493 ]' 00:19:45.493 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.751 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.009 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:19:46.944 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.944 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:46.944 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.944 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.944 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.944 12:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.944 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.944 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.944 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.944 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.203 12:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.203 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.462 00:19:47.462 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.462 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.462 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.720 { 00:19:47.720 "cntlid": 121, 00:19:47.720 "qid": 0, 00:19:47.720 "state": "enabled", 00:19:47.720 "thread": 
"nvmf_tgt_poll_group_000", 00:19:47.720 "listen_address": { 00:19:47.720 "trtype": "TCP", 00:19:47.720 "adrfam": "IPv4", 00:19:47.720 "traddr": "10.0.0.2", 00:19:47.720 "trsvcid": "4420" 00:19:47.720 }, 00:19:47.720 "peer_address": { 00:19:47.720 "trtype": "TCP", 00:19:47.720 "adrfam": "IPv4", 00:19:47.720 "traddr": "10.0.0.1", 00:19:47.720 "trsvcid": "45440" 00:19:47.720 }, 00:19:47.720 "auth": { 00:19:47.720 "state": "completed", 00:19:47.720 "digest": "sha512", 00:19:47.720 "dhgroup": "ffdhe4096" 00:19:47.720 } 00:19:47.720 } 00:19:47.720 ]' 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.720 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.978 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.978 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.978 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.237 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.174 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.433 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.692 00:19:49.692 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.692 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.692 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.951 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.951 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.951 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.951 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.951 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.951 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.951 { 00:19:49.951 "cntlid": 123, 00:19:49.951 "qid": 0, 00:19:49.951 "state": "enabled", 00:19:49.951 "thread": "nvmf_tgt_poll_group_000", 00:19:49.951 "listen_address": { 00:19:49.951 "trtype": "TCP", 00:19:49.951 "adrfam": "IPv4", 00:19:49.951 "traddr": "10.0.0.2", 00:19:49.951 "trsvcid": "4420" 00:19:49.951 }, 00:19:49.951 "peer_address": { 00:19:49.951 "trtype": "TCP", 00:19:49.951 "adrfam": "IPv4", 00:19:49.951 "traddr": "10.0.0.1", 00:19:49.951 "trsvcid": "45468" 00:19:49.951 }, 00:19:49.951 "auth": { 00:19:49.951 "state": "completed", 00:19:49.951 "digest": "sha512", 00:19:49.952 "dhgroup": "ffdhe4096" 00:19:49.952 } 00:19:49.952 } 00:19:49.952 ]' 00:19:49.952 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.210 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.470 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.406 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.665 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.665 12:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.924 00:19:51.924 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.924 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.924 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.183 { 00:19:52.183 "cntlid": 125, 00:19:52.183 "qid": 0, 00:19:52.183 "state": "enabled", 00:19:52.183 "thread": "nvmf_tgt_poll_group_000", 00:19:52.183 "listen_address": { 00:19:52.183 "trtype": "TCP", 00:19:52.183 "adrfam": "IPv4", 00:19:52.183 "traddr": "10.0.0.2", 00:19:52.183 "trsvcid": "4420" 00:19:52.183 }, 00:19:52.183 "peer_address": { 00:19:52.183 "trtype": "TCP", 00:19:52.183 "adrfam": "IPv4", 00:19:52.183 "traddr": "10.0.0.1", 
00:19:52.183 "trsvcid": "45512" 00:19:52.183 }, 00:19:52.183 "auth": { 00:19:52.183 "state": "completed", 00:19:52.183 "digest": "sha512", 00:19:52.183 "dhgroup": "ffdhe4096" 00:19:52.183 } 00:19:52.183 } 00:19:52.183 ]' 00:19:52.183 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.441 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.700 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.635 12:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:53.635 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:53.893 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:53.893 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.893 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.893 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:53.893 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.894 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.460 00:19:54.460 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.460 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.460 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.460 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.718 12:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.718 { 00:19:54.718 "cntlid": 127, 00:19:54.718 "qid": 0, 00:19:54.718 "state": "enabled", 00:19:54.718 "thread": "nvmf_tgt_poll_group_000", 00:19:54.718 "listen_address": { 00:19:54.718 "trtype": "TCP", 00:19:54.718 "adrfam": "IPv4", 00:19:54.718 "traddr": "10.0.0.2", 00:19:54.718 "trsvcid": "4420" 00:19:54.718 }, 00:19:54.718 "peer_address": { 00:19:54.718 "trtype": "TCP", 00:19:54.718 "adrfam": "IPv4", 00:19:54.718 "traddr": "10.0.0.1", 00:19:54.718 "trsvcid": "45534" 00:19:54.718 }, 00:19:54.718 "auth": { 00:19:54.718 "state": "completed", 00:19:54.718 "digest": "sha512", 00:19:54.718 "dhgroup": "ffdhe4096" 00:19:54.718 } 00:19:54.718 } 00:19:54.718 ]' 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.718 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.976 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.913 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:56.171 12:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:56.171 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.171 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.171 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:56.171 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.172 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:56.737 00:19:56.737 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.737 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.737 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.996 { 00:19:56.996 "cntlid": 129, 00:19:56.996 "qid": 0, 00:19:56.996 "state": "enabled", 00:19:56.996 "thread": "nvmf_tgt_poll_group_000", 00:19:56.996 "listen_address": { 00:19:56.996 "trtype": "TCP", 00:19:56.996 "adrfam": "IPv4", 00:19:56.996 "traddr": "10.0.0.2", 00:19:56.996 "trsvcid": "4420" 00:19:56.996 }, 00:19:56.996 "peer_address": { 00:19:56.996 "trtype": "TCP", 00:19:56.996 "adrfam": "IPv4", 00:19:56.996 "traddr": "10.0.0.1", 00:19:56.996 "trsvcid": "45570" 00:19:56.996 }, 00:19:56.996 "auth": { 00:19:56.996 "state": "completed", 00:19:56.996 "digest": "sha512", 00:19:56.996 "dhgroup": "ffdhe6144" 00:19:56.996 } 00:19:56.996 } 00:19:56.996 ]' 00:19:56.996 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.258 
12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.258 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.258 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.258 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.258 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.258 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.258 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.517 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.452 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.020 00:19:59.020 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.020 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.020 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:59.279 { 00:19:59.279 "cntlid": 131, 00:19:59.279 "qid": 0, 00:19:59.279 "state": "enabled", 00:19:59.279 "thread": "nvmf_tgt_poll_group_000", 00:19:59.279 "listen_address": { 00:19:59.279 "trtype": "TCP", 00:19:59.279 "adrfam": "IPv4", 00:19:59.279 "traddr": "10.0.0.2", 00:19:59.279 "trsvcid": "4420" 00:19:59.279 }, 00:19:59.279 "peer_address": { 00:19:59.279 "trtype": "TCP", 00:19:59.279 "adrfam": "IPv4", 00:19:59.279 "traddr": "10.0.0.1", 00:19:59.279 "trsvcid": "33578" 00:19:59.279 }, 00:19:59.279 "auth": { 00:19:59.279 "state": "completed", 00:19:59.279 "digest": "sha512", 00:19:59.279 "dhgroup": "ffdhe6144" 00:19:59.279 } 00:19:59.279 } 00:19:59.279 ]' 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.279 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.537 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.537 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.537 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.537 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.537 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.103 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:01.038 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.298 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.866 00:20:01.866 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.866 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.866 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.124 { 00:20:02.124 "cntlid": 133, 00:20:02.124 "qid": 0, 00:20:02.124 "state": "enabled", 00:20:02.124 "thread": "nvmf_tgt_poll_group_000", 00:20:02.124 "listen_address": { 00:20:02.124 "trtype": "TCP", 00:20:02.124 "adrfam": "IPv4", 00:20:02.124 "traddr": "10.0.0.2", 00:20:02.124 "trsvcid": "4420" 00:20:02.124 }, 00:20:02.124 "peer_address": { 00:20:02.124 "trtype": "TCP", 00:20:02.124 "adrfam": "IPv4", 00:20:02.124 "traddr": "10.0.0.1", 00:20:02.124 "trsvcid": "33596" 00:20:02.124 }, 00:20:02.124 "auth": { 00:20:02.124 "state": "completed", 00:20:02.124 "digest": "sha512", 00:20:02.124 "dhgroup": "ffdhe6144" 00:20:02.124 } 00:20:02.124 } 00:20:02.124 ]' 00:20:02.124 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.383 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.641 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.576 12:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:03.576 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.142 00:20:04.142 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.142 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.142 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.400 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.400 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.400 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.400 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.400 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.400 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.400 { 00:20:04.400 "cntlid": 135, 00:20:04.400 "qid": 0, 00:20:04.400 "state": "enabled", 00:20:04.400 "thread": "nvmf_tgt_poll_group_000", 00:20:04.401 "listen_address": { 00:20:04.401 "trtype": "TCP", 00:20:04.401 "adrfam": "IPv4", 00:20:04.401 "traddr": "10.0.0.2", 00:20:04.401 "trsvcid": "4420" 00:20:04.401 }, 00:20:04.401 "peer_address": { 00:20:04.401 "trtype": "TCP", 00:20:04.401 "adrfam": "IPv4", 00:20:04.401 "traddr": "10.0.0.1", 
00:20:04.401 "trsvcid": "33640" 00:20:04.401 }, 00:20:04.401 "auth": { 00:20:04.401 "state": "completed", 00:20:04.401 "digest": "sha512", 00:20:04.401 "dhgroup": "ffdhe6144" 00:20:04.401 } 00:20:04.401 } 00:20:04.401 ]' 00:20:04.401 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.401 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.401 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.401 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.401 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.659 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.659 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.659 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.918 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.855 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.855 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.791 00:20:06.791 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.791 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.791 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.791 { 00:20:06.791 "cntlid": 137, 00:20:06.791 "qid": 0, 00:20:06.791 "state": "enabled", 00:20:06.791 "thread": "nvmf_tgt_poll_group_000", 00:20:06.791 "listen_address": { 00:20:06.791 "trtype": "TCP", 00:20:06.791 "adrfam": "IPv4", 00:20:06.791 "traddr": "10.0.0.2", 00:20:06.791 "trsvcid": "4420" 00:20:06.791 }, 00:20:06.791 "peer_address": { 00:20:06.791 "trtype": "TCP", 00:20:06.791 "adrfam": "IPv4", 00:20:06.791 "traddr": "10.0.0.1", 00:20:06.791 "trsvcid": "33676" 00:20:06.791 }, 00:20:06.791 "auth": { 00:20:06.791 "state": "completed", 00:20:06.791 "digest": "sha512", 00:20:06.791 "dhgroup": "ffdhe8192" 00:20:06.791 } 00:20:06.791 } 00:20:06.791 ]' 00:20:06.791 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.050 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.308 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.244 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.503 12:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.503 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:09.072 00:20:09.072 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.072 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.072 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.330 { 00:20:09.330 "cntlid": 139, 00:20:09.330 "qid": 0, 00:20:09.330 "state": "enabled", 00:20:09.330 "thread": "nvmf_tgt_poll_group_000", 00:20:09.330 "listen_address": { 00:20:09.330 "trtype": "TCP", 00:20:09.330 "adrfam": "IPv4", 00:20:09.330 "traddr": "10.0.0.2", 00:20:09.330 "trsvcid": "4420" 00:20:09.330 }, 00:20:09.330 "peer_address": { 00:20:09.330 "trtype": "TCP", 00:20:09.330 "adrfam": "IPv4", 00:20:09.330 "traddr": "10.0.0.1", 00:20:09.330 "trsvcid": "46240" 00:20:09.330 }, 00:20:09.330 "auth": { 00:20:09.330 "state": "completed", 00:20:09.330 "digest": "sha512", 00:20:09.330 "dhgroup": "ffdhe8192" 00:20:09.330 } 00:20:09.330 } 00:20:09.330 ]' 00:20:09.330 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.589 
12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.589 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.589 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.589 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.589 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.589 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.589 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.847 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjUyNTc1OGM2YTQxYjljZDc5MDI2ZTIzNzc5YzA0N2Rf6r+A: --dhchap-ctrl-secret DHHC-1:02:ZDQ2YWUwOTk2ZTEzN2U5OTI0NjNjZGE3NDEyYzNiNzc2OWI5ZTg3NGVjN2ZkZjgygpqg3A==: 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.785 12:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.785 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.785 12:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.785 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.785 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.722 00:20:11.722 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.722 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.722 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.722 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.722 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.722 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.722 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.722 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.980 { 
00:20:11.980 "cntlid": 141, 00:20:11.980 "qid": 0, 00:20:11.980 "state": "enabled", 00:20:11.980 "thread": "nvmf_tgt_poll_group_000", 00:20:11.980 "listen_address": { 00:20:11.980 "trtype": "TCP", 00:20:11.980 "adrfam": "IPv4", 00:20:11.980 "traddr": "10.0.0.2", 00:20:11.980 "trsvcid": "4420" 00:20:11.980 }, 00:20:11.980 "peer_address": { 00:20:11.980 "trtype": "TCP", 00:20:11.980 "adrfam": "IPv4", 00:20:11.980 "traddr": "10.0.0.1", 00:20:11.980 "trsvcid": "46266" 00:20:11.980 }, 00:20:11.980 "auth": { 00:20:11.980 "state": "completed", 00:20:11.980 "digest": "sha512", 00:20:11.980 "dhgroup": "ffdhe8192" 00:20:11.980 } 00:20:11.980 } 00:20:11.980 ]' 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.980 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.239 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzQzYzY1ZDZkNDg5ODU1MjM4ZDY4YzlmMjA1NDliMDUwNDAxMzM0MWI2YmQ4OTY0lb94fA==: --dhchap-ctrl-secret DHHC-1:01:ZmUyNTUyNTM1ZGViOTY1YjVkNmZhNjg3NzEzYTg5YWEhAp72: 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.614 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.550 00:20:14.550 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.550 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.550 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.809 12:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.809 { 00:20:14.809 "cntlid": 143, 00:20:14.809 "qid": 0, 00:20:14.809 "state": "enabled", 00:20:14.809 "thread": "nvmf_tgt_poll_group_000", 00:20:14.809 "listen_address": { 00:20:14.809 "trtype": "TCP", 00:20:14.809 "adrfam": "IPv4", 00:20:14.809 "traddr": "10.0.0.2", 00:20:14.809 "trsvcid": "4420" 00:20:14.809 }, 00:20:14.809 "peer_address": { 00:20:14.809 "trtype": "TCP", 00:20:14.809 "adrfam": "IPv4", 00:20:14.809 "traddr": "10.0.0.1", 00:20:14.809 "trsvcid": "46284" 00:20:14.809 }, 00:20:14.809 "auth": { 00:20:14.809 "state": "completed", 00:20:14.809 "digest": "sha512", 00:20:14.809 "dhgroup": "ffdhe8192" 00:20:14.809 } 00:20:14.809 } 00:20:14.809 ]' 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.809 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.809 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.809 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.809 12:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.809 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.809 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.068 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:16.003 12:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.003 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.261 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.261 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.261 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.261 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:20:16.261 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.261 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.827 00:20:16.827 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.827 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.827 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.394 { 00:20:17.394 "cntlid": 145, 00:20:17.394 "qid": 0, 00:20:17.394 "state": "enabled", 
00:20:17.394 "thread": "nvmf_tgt_poll_group_000", 00:20:17.394 "listen_address": { 00:20:17.394 "trtype": "TCP", 00:20:17.394 "adrfam": "IPv4", 00:20:17.394 "traddr": "10.0.0.2", 00:20:17.394 "trsvcid": "4420" 00:20:17.394 }, 00:20:17.394 "peer_address": { 00:20:17.394 "trtype": "TCP", 00:20:17.394 "adrfam": "IPv4", 00:20:17.394 "traddr": "10.0.0.1", 00:20:17.394 "trsvcid": "46312" 00:20:17.394 }, 00:20:17.394 "auth": { 00:20:17.394 "state": "completed", 00:20:17.394 "digest": "sha512", 00:20:17.394 "dhgroup": "ffdhe8192" 00:20:17.394 } 00:20:17.394 } 00:20:17.394 ]' 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.394 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.653 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.653 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.653 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.653 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.653 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.222 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NDEzMzVkNTQyMTM3Njg4MGQyZWU1ODljY2EwNWQzNWM3NGVmZmQwYmZmZWMwNzBmzUlBSg==: --dhchap-ctrl-secret DHHC-1:03:MGRkN2VkNGRjNmRmYTU4MmU3MDYxY2U2ZTM4ZGFmMjk5ZDczMzZhZmUwYWNjMTg3MzAzZGE5YmYwZjdmMTA4YiGxAyo=: 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:18.789 
12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.789 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:19.724 request: 00:20:19.724 { 00:20:19.724 "name": "nvme0", 00:20:19.724 "trtype": "tcp", 00:20:19.724 "traddr": "10.0.0.2", 00:20:19.724 "adrfam": "ipv4", 00:20:19.724 "trsvcid": "4420", 00:20:19.724 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:19.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:19.724 "prchk_reftag": false, 00:20:19.724 "prchk_guard": false, 00:20:19.724 "hdgst": false, 00:20:19.724 "ddgst": false, 00:20:19.724 "dhchap_key": "key2", 
00:20:19.724 "method": "bdev_nvme_attach_controller", 00:20:19.724 "req_id": 1 00:20:19.724 } 00:20:19.724 Got JSON-RPC error response 00:20:19.724 response: 00:20:19.724 { 00:20:19.724 "code": -5, 00:20:19.724 "message": "Input/output error" 00:20:19.724 } 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:19.724 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:20.659 request: 00:20:20.659 { 00:20:20.659 "name": "nvme0", 00:20:20.659 
"trtype": "tcp", 00:20:20.659 "traddr": "10.0.0.2", 00:20:20.659 "adrfam": "ipv4", 00:20:20.659 "trsvcid": "4420", 00:20:20.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:20.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:20.659 "prchk_reftag": false, 00:20:20.659 "prchk_guard": false, 00:20:20.659 "hdgst": false, 00:20:20.659 "ddgst": false, 00:20:20.659 "dhchap_key": "key1", 00:20:20.659 "dhchap_ctrlr_key": "ckey2", 00:20:20.659 "method": "bdev_nvme_attach_controller", 00:20:20.659 "req_id": 1 00:20:20.659 } 00:20:20.659 Got JSON-RPC error response 00:20:20.659 response: 00:20:20.659 { 00:20:20.659 "code": -5, 00:20:20.659 "message": "Input/output error" 00:20:20.659 } 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.659 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.659 12:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.593 request: 00:20:21.593 { 00:20:21.593 "name": "nvme0", 00:20:21.593 "trtype": "tcp", 00:20:21.593 "traddr": "10.0.0.2", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "4420", 00:20:21.593 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:21.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:21.593 "prchk_reftag": false, 00:20:21.593 "prchk_guard": false, 00:20:21.593 "hdgst": false, 00:20:21.593 "ddgst": false, 00:20:21.593 "dhchap_key": "key1", 00:20:21.593 "dhchap_ctrlr_key": "ckey1", 00:20:21.593 "method": "bdev_nvme_attach_controller", 00:20:21.593 "req_id": 1 00:20:21.593 } 00:20:21.593 Got JSON-RPC error response 00:20:21.593 response: 00:20:21.593 { 00:20:21.593 "code": -5, 00:20:21.593 "message": "Input/output error" 00:20:21.593 } 00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:21.593 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4123652 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 4123652 ']' 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 4123652 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.594 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4123652 00:20:21.852 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:21.852 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:21.852 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4123652' 00:20:21.852 killing process with pid 4123652 00:20:21.852 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 4123652 00:20:21.852 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 4123652 00:20:22.110 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:22.110 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.110 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4155760 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4155760 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 4155760 ']' 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.111 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4155760 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 4155760 ']' 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.677 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.935 
12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.935 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.904 00:20:23.904 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.904 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.904 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.162 12:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.162 { 00:20:24.162 "cntlid": 1, 00:20:24.162 "qid": 0, 00:20:24.162 "state": "enabled", 00:20:24.162 "thread": "nvmf_tgt_poll_group_000", 00:20:24.162 "listen_address": { 00:20:24.162 "trtype": "TCP", 00:20:24.162 "adrfam": "IPv4", 00:20:24.162 "traddr": "10.0.0.2", 00:20:24.162 "trsvcid": "4420" 00:20:24.162 }, 00:20:24.162 "peer_address": { 00:20:24.162 "trtype": "TCP", 00:20:24.162 "adrfam": "IPv4", 00:20:24.162 "traddr": "10.0.0.1", 00:20:24.162 "trsvcid": "41950" 00:20:24.162 }, 00:20:24.162 "auth": { 00:20:24.162 "state": "completed", 00:20:24.162 "digest": "sha512", 00:20:24.162 "dhgroup": "ffdhe8192" 00:20:24.162 } 00:20:24.162 } 00:20:24.162 ]' 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.162 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.420 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.420 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.420 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.420 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.420 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.678 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWExMDYxZmRjOTcyYmJiYTJiMDE2MjAwMjE3ODhhOTE0NzBkOTA3Y2UyOGFhODRjODIyNmNkYTJhNTA2YzkwOd2Asns=: 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:25.244 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:25.502 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.502 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:25.502 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.502 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:25.503 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.503 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:25.503 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.503 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.503 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.069 request: 00:20:26.069 { 00:20:26.069 "name": "nvme0", 00:20:26.069 "trtype": "tcp", 00:20:26.069 
"traddr": "10.0.0.2", 00:20:26.069 "adrfam": "ipv4", 00:20:26.069 "trsvcid": "4420", 00:20:26.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:26.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:26.069 "prchk_reftag": false, 00:20:26.069 "prchk_guard": false, 00:20:26.069 "hdgst": false, 00:20:26.069 "ddgst": false, 00:20:26.069 "dhchap_key": "key3", 00:20:26.069 "method": "bdev_nvme_attach_controller", 00:20:26.069 "req_id": 1 00:20:26.069 } 00:20:26.069 Got JSON-RPC error response 00:20:26.069 response: 00:20:26.069 { 00:20:26.069 "code": -5, 00:20:26.069 "message": "Input/output error" 00:20:26.069 } 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:26.069 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.636 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.203 request: 00:20:27.203 { 00:20:27.203 "name": "nvme0", 00:20:27.203 "trtype": "tcp", 00:20:27.203 "traddr": "10.0.0.2", 00:20:27.203 "adrfam": "ipv4", 00:20:27.203 "trsvcid": "4420", 00:20:27.203 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:27.203 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:27.203 "prchk_reftag": false, 00:20:27.203 "prchk_guard": false, 00:20:27.203 "hdgst": false, 00:20:27.203 "ddgst": false, 00:20:27.203 "dhchap_key": "key3", 00:20:27.203 "method": "bdev_nvme_attach_controller", 00:20:27.203 "req_id": 1 00:20:27.203 } 00:20:27.203 Got JSON-RPC error response 00:20:27.203 response: 00:20:27.203 { 00:20:27.203 "code": -5, 00:20:27.203 "message": "Input/output error" 00:20:27.203 } 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:27.203 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:27.462 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:27.720 request: 00:20:27.720 { 00:20:27.720 "name": "nvme0", 00:20:27.720 "trtype": "tcp", 00:20:27.720 "traddr": "10.0.0.2", 00:20:27.720 "adrfam": "ipv4", 00:20:27.720 "trsvcid": "4420", 00:20:27.720 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:27.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:27.720 "prchk_reftag": false, 00:20:27.720 "prchk_guard": false, 00:20:27.720 "hdgst": false, 00:20:27.720 "ddgst": false, 00:20:27.720 "dhchap_key": "key0", 00:20:27.720 "dhchap_ctrlr_key": "key1", 00:20:27.720 "method": "bdev_nvme_attach_controller", 00:20:27.720 "req_id": 1 00:20:27.720 } 00:20:27.720 Got JSON-RPC error response 00:20:27.720 response: 00:20:27.720 { 00:20:27.720 "code": -5, 00:20:27.720 "message": "Input/output error" 00:20:27.720 } 00:20:27.720 12:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:27.720 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:27.720 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:27.720 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:27.720 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.720 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.978 00:20:27.978 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:27.978 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:27.978 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.235 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.235 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.235 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4123683 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 4123683 ']' 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 4123683 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4123683 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4123683' 00:20:28.802 killing process with pid 4123683 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 4123683 00:20:28.802 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 4123683 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:29.062 12:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.062 rmmod nvme_tcp 00:20:29.062 rmmod nvme_fabrics 00:20:29.062 rmmod nvme_keyring 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4155760 ']' 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4155760 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 4155760 ']' 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 4155760 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.062 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4155760 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 4155760' 00:20:29.321 killing process with pid 4155760 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 4155760 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 4155760 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.321 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.855 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.855 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Zun /tmp/spdk.key-sha256.ePA /tmp/spdk.key-sha384.OWH /tmp/spdk.key-sha512.Irs /tmp/spdk.key-sha512.H9i /tmp/spdk.key-sha384.4Vg /tmp/spdk.key-sha256.Dml '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:31.855 00:20:31.855 real 3m9.286s 00:20:31.855 user 7m24.199s 00:20:31.855 sys 0m25.659s 00:20:31.855 12:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:31.855 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.855 ************************************ 00:20:31.855 END TEST nvmf_auth_target 00:20:31.856 ************************************ 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:31.856 ************************************ 00:20:31.856 START TEST nvmf_bdevio_no_huge 00:20:31.856 ************************************ 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:31.856 * Looking for test storage... 
00:20:31.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:31.856 
12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.856 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.173 12:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:37.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:37.173 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:37.173 Found net devices under 0000:af:00.0: cvl_0_0 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.173 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:37.174 Found net devices under 0000:af:00.1: cvl_0_1 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.174 12:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.174 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:20:37.432 00:20:37.432 --- 10.0.0.2 ping statistics --- 00:20:37.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.432 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:20:37.432 00:20:37.432 --- 10.0.0.1 ping statistics --- 00:20:37.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.432 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.432 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4160767 00:20:37.433 12:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4160767 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 4160767 ']' 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.433 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.691 [2024-07-25 12:07:14.734417] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:20:37.691 [2024-07-25 12:07:14.734475] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:37.691 [2024-07-25 12:07:14.846500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.950 [2024-07-25 12:07:15.077789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:37.950 [2024-07-25 12:07:15.077847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.950 [2024-07-25 12:07:15.077868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.950 [2024-07-25 12:07:15.077886] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.950 [2024-07-25 12:07:15.077902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.950 [2024-07-25 12:07:15.078067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.950 [2024-07-25 12:07:15.078181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.950 [2024-07-25 12:07:15.078296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.950 [2024-07-25 12:07:15.078302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.517 [2024-07-25 12:07:15.727442] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.517 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.518 Malloc0 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.518 [2024-07-25 12:07:15.780875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.518 { 00:20:38.518 "params": { 00:20:38.518 "name": "Nvme$subsystem", 00:20:38.518 "trtype": "$TEST_TRANSPORT", 00:20:38.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.518 "adrfam": "ipv4", 00:20:38.518 "trsvcid": "$NVMF_PORT", 00:20:38.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.518 "hdgst": ${hdgst:-false}, 00:20:38.518 "ddgst": ${ddgst:-false} 00:20:38.518 }, 00:20:38.518 "method": "bdev_nvme_attach_controller" 00:20:38.518 } 00:20:38.518 EOF 00:20:38.518 )") 00:20:38.518 12:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:38.518 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:38.518 "params": { 00:20:38.518 "name": "Nvme1", 00:20:38.518 "trtype": "tcp", 00:20:38.518 "traddr": "10.0.0.2", 00:20:38.518 "adrfam": "ipv4", 00:20:38.518 "trsvcid": "4420", 00:20:38.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.518 "hdgst": false, 00:20:38.518 "ddgst": false 00:20:38.518 }, 00:20:38.518 "method": "bdev_nvme_attach_controller" 00:20:38.518 }' 00:20:38.817 [2024-07-25 12:07:15.834585] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:20:38.817 [2024-07-25 12:07:15.834651] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4160877 ] 00:20:38.817 [2024-07-25 12:07:15.920694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:38.817 [2024-07-25 12:07:16.039240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.817 [2024-07-25 12:07:16.039354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.817 [2024-07-25 12:07:16.039354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.076 I/O targets: 00:20:39.076 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:39.076 00:20:39.076 00:20:39.076 CUnit - A unit testing framework for C - Version 2.1-3 00:20:39.076 http://cunit.sourceforge.net/ 00:20:39.076 00:20:39.076 00:20:39.076 Suite: bdevio tests on: Nvme1n1 00:20:39.076 Test: blockdev write read block 
...passed 00:20:39.334 Test: blockdev write zeroes read block ...passed 00:20:39.334 Test: blockdev write zeroes read no split ...passed 00:20:39.334 Test: blockdev write zeroes read split ...passed 00:20:39.334 Test: blockdev write zeroes read split partial ...passed 00:20:39.334 Test: blockdev reset ...[2024-07-25 12:07:16.487248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.334 [2024-07-25 12:07:16.487325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x586520 (9): Bad file descriptor 00:20:39.334 [2024-07-25 12:07:16.549683] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:39.334 passed 00:20:39.334 Test: blockdev write read 8 blocks ...passed 00:20:39.334 Test: blockdev write read size > 128k ...passed 00:20:39.334 Test: blockdev write read invalid size ...passed 00:20:39.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.593 Test: blockdev write read max offset ...passed 00:20:39.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.593 Test: blockdev writev readv 8 blocks ...passed 00:20:39.593 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.593 Test: blockdev writev readv block ...passed 00:20:39.593 Test: blockdev writev readv size > 128k ...passed 00:20:39.593 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.593 Test: blockdev comparev and writev ...[2024-07-25 12:07:16.769752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.769816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.769857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.769882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.770502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.770535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.770572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.770594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.771210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.771241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.771278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.771301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.771919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.771950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:20:39.593 [2024-07-25 12:07:16.771988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.593 [2024-07-25 12:07:16.772009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:39.593 passed 00:20:39.593 Test: blockdev nvme passthru rw ...passed 00:20:39.593 Test: blockdev nvme passthru vendor specific ...[2024-07-25 12:07:16.855250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.593 [2024-07-25 12:07:16.855289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.855584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.593 [2024-07-25 12:07:16.855623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.855908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.593 [2024-07-25 12:07:16.855937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:39.593 [2024-07-25 12:07:16.856218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.593 [2024-07-25 12:07:16.856247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.593 passed 00:20:39.593 Test: blockdev nvme admin passthru ...passed 00:20:39.851 Test: blockdev copy ...passed 00:20:39.851 00:20:39.851 Run Summary: Type Total Ran Passed Failed Inactive 00:20:39.851 suites 1 1 
n/a 0 0 00:20:39.851 tests 23 23 23 0 0 00:20:39.851 asserts 152 152 152 0 n/a 00:20:39.851 00:20:39.851 Elapsed time = 1.170 seconds 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.108 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.109 rmmod nvme_tcp 00:20:40.109 rmmod nvme_fabrics 00:20:40.109 rmmod nvme_keyring 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:40.109 12:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4160767 ']' 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4160767 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 4160767 ']' 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 4160767 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4160767 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4160767' 00:20:40.109 killing process with pid 4160767 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 4160767 00:20:40.109 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 4160767 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.046 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.945 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.945 00:20:42.945 real 0m11.501s 00:20:42.945 user 0m15.377s 00:20:42.945 sys 0m5.881s 00:20:42.945 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.945 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 ************************************ 00:20:42.945 END TEST nvmf_bdevio_no_huge 00:20:42.945 ************************************ 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.203 ************************************ 00:20:43.203 START TEST nvmf_tls 00:20:43.203 ************************************ 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:43.203 * Looking for test storage... 
00:20:43.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.203 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.204 
12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.204 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.767 12:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.767 12:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:49.767 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:49.767 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.767 12:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:49.767 Found net devices under 0000:af:00.0: cvl_0_0 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:49.767 Found net devices under 0000:af:00.1: cvl_0_1 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.767 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.768 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:49.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:20:49.768 00:20:49.768 --- 10.0.0.2 ping statistics --- 00:20:49.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.768 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:49.768 00:20:49.768 --- 10.0.0.1 ping statistics --- 00:20:49.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.768 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4164861 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4164861 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4164861 ']' 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.768 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.768 [2024-07-25 12:07:26.334918] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:20:49.768 [2024-07-25 12:07:26.334973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.768 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.768 [2024-07-25 12:07:26.423934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.768 [2024-07-25 12:07:26.533855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.768 [2024-07-25 12:07:26.533898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.768 [2024-07-25 12:07:26.533910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.768 [2024-07-25 12:07:26.533921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.768 [2024-07-25 12:07:26.533931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.768 [2024-07-25 12:07:26.533955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.335 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.335 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:50.335 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.336 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.336 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.336 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.336 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:50.336 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:50.594 true 00:20:50.594 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.594 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:50.852 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:50.852 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:50.852 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:51.110 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.110 12:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:51.369 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:51.369 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:51.369 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:51.628 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.628 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:51.886 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:51.886 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:51.886 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.886 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:52.144 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:52.144 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:52.144 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:52.403 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.403 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:52.662 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:52.662 
12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:52.662 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:52.920 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.920 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.9khDCbEuxF 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RJVqLw1EkN 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.9khDCbEuxF 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RJVqLw1EkN 00:20:53.179 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:53.438 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:53.696 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.9khDCbEuxF 00:20:53.696 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9khDCbEuxF 00:20:53.696 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.954 [2024-07-25 12:07:31.180269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.954 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.213 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.473 [2024-07-25 12:07:31.665600] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.473 [2024-07-25 12:07:31.665859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.473 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.769 malloc0 00:20:54.769 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.027 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9khDCbEuxF 00:20:55.285 
[2024-07-25 12:07:32.409957] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:20:55.285 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9khDCbEuxF
00:20:55.285 EAL: No free 2048 kB hugepages reported on node 1
00:21:05.260 Initializing NVMe Controllers
00:21:05.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:05.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:05.260 Initialization complete. Launching workers.
00:21:05.260 ========================================================
00:21:05.260 Latency(us)
00:21:05.260 Device Information : IOPS MiB/s Average min max
00:21:05.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8419.98 32.89 7603.20 1150.11 8318.77
00:21:05.260 ========================================================
00:21:05.260 Total : 8419.98 32.89 7603.20 1150.11 8318.77
00:21:05.260
00:21:05.260 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9khDCbEuxF
00:21:05.260 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:05.260 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:05.260 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:05.260 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9khDCbEuxF'
00:21:05.260 12:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4167805 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4167805 /var/tmp/bdevperf.sock 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4167805 ']' 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.519 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.519 [2024-07-25 12:07:42.609524] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:21:05.519 [2024-07-25 12:07:42.609594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167805 ] 00:21:05.519 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.519 [2024-07-25 12:07:42.721725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.778 [2024-07-25 12:07:42.875310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.344 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.344 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:06.344 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9khDCbEuxF 00:21:06.603 [2024-07-25 12:07:43.793891] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.603 [2024-07-25 12:07:43.794049] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.603 TLSTESTn1 00:21:06.603 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.862 Running I/O for 10 seconds... 
00:21:16.829
00:21:16.830 Latency(us)
00:21:16.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.830 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:16.830 Verification LBA range: start 0x0 length 0x2000
00:21:16.830 TLSTESTn1 : 10.02 2830.96 11.06 0.00 0.00 45091.50 10843.23 44564.48
00:21:16.830 ===================================================================================================================
00:21:16.830 Total : 2830.96 11.06 0.00 0.00 45091.50 10843.23 44564.48
00:21:16.830 0
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 4167805
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4167805 ']'
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4167805
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4167805
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4167805'
killing process with pid 4167805
12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4167805
Received shutdown signal, test time was about 10.000000 seconds
00:21:16.830
00:21:16.830 Latency(us)
00:21:16.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.830 ===================================================================================================================
00:21:16.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:16.830 [2024-07-25 12:07:54.129931] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:16.830 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4167805
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RJVqLw1EkN
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RJVqLw1EkN
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RJVqLw1EkN
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:17.398 12:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RJVqLw1EkN' 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4169901 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4169901 /var/tmp/bdevperf.sock 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4169901 ']' 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.398 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.398 [2024-07-25 12:07:54.534979] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:21:17.398 [2024-07-25 12:07:54.535052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169901 ] 00:21:17.398 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.398 [2024-07-25 12:07:54.648912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.657 [2024-07-25 12:07:54.790962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.223 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:18.223 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:18.223 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RJVqLw1EkN 00:21:18.482 [2024-07-25 12:07:55.647818] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.482 [2024-07-25 12:07:55.647979] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.482 [2024-07-25 12:07:55.656731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:18.482 [2024-07-25 12:07:55.656975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8af0 (107): Transport endpoint is not connected 00:21:18.482 [2024-07-25 12:07:55.657953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8af0 
(9): Bad file descriptor
00:21:18.483 [2024-07-25 12:07:55.658952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:18.483 [2024-07-25 12:07:55.658986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:21:18.483 [2024-07-25 12:07:55.659012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:18.483 request:
00:21:18.483 {
00:21:18.483 "name": "TLSTEST",
00:21:18.483 "trtype": "tcp",
00:21:18.483 "traddr": "10.0.0.2",
00:21:18.483 "adrfam": "ipv4",
00:21:18.483 "trsvcid": "4420",
00:21:18.483 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:18.483 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:18.483 "prchk_reftag": false,
00:21:18.483 "prchk_guard": false,
00:21:18.483 "hdgst": false,
00:21:18.483 "ddgst": false,
00:21:18.483 "psk": "/tmp/tmp.RJVqLw1EkN",
00:21:18.483 "method": "bdev_nvme_attach_controller",
00:21:18.483 "req_id": 1
00:21:18.483 }
00:21:18.483 Got JSON-RPC error response
00:21:18.483 response:
00:21:18.483 {
00:21:18.483 "code": -5,
00:21:18.483 "message": "Input/output error"
00:21:18.483 }
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 4169901
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4169901 ']'
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4169901
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4169901
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:18.483 12:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4169901'
killing process with pid 4169901
12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4169901
Received shutdown signal, test time was about 10.000000 seconds
00:21:18.483
00:21:18.483 Latency(us)
00:21:18.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:18.483 ===================================================================================================================
00:21:18.483 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:18.483 [2024-07-25 12:07:55.742312] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:18.483 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4169901
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9khDCbEuxF
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9khDCbEuxF 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9khDCbEuxF 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9khDCbEuxF' 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4170173 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4170173 /var/tmp/bdevperf.sock 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 4170173 ']' 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.742 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.001 [2024-07-25 12:07:56.077621] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:19.001 [2024-07-25 12:07:56.077685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170173 ] 00:21:19.001 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.001 [2024-07-25 12:07:56.189821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.260 [2024-07-25 12:07:56.331808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.827 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.827 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:19.827 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.9khDCbEuxF 00:21:20.093 [2024-07-25 12:07:57.183917] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.093 [2024-07-25 12:07:57.184072] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.093 [2024-07-25 12:07:57.196792] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:20.093 [2024-07-25 12:07:57.196826] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:20.093 [2024-07-25 12:07:57.196864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.093 [2024-07-25 12:07:57.197109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a77af0 (107): Transport endpoint is not connected 00:21:20.093 [2024-07-25 12:07:57.198092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a77af0 (9): Bad file descriptor 00:21:20.093 [2024-07-25 12:07:57.199089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:20.093 [2024-07-25 12:07:57.199117] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.093 [2024-07-25 12:07:57.199143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:20.093 request:
00:21:20.093 {
00:21:20.093 "name": "TLSTEST",
00:21:20.093 "trtype": "tcp",
00:21:20.093 "traddr": "10.0.0.2",
00:21:20.093 "adrfam": "ipv4",
00:21:20.093 "trsvcid": "4420",
00:21:20.093 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:20.093 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:21:20.093 "prchk_reftag": false,
00:21:20.093 "prchk_guard": false,
00:21:20.093 "hdgst": false,
00:21:20.093 "ddgst": false,
00:21:20.093 "psk": "/tmp/tmp.9khDCbEuxF",
00:21:20.093 "method": "bdev_nvme_attach_controller",
00:21:20.093 "req_id": 1
00:21:20.093 }
00:21:20.093 Got JSON-RPC error response
00:21:20.093 response:
00:21:20.093 {
00:21:20.093 "code": -5,
00:21:20.093 "message": "Input/output error"
00:21:20.093 }
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 4170173
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4170173 ']'
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4170173
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4170173
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4170173'
killing process with pid 4170173
12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4170173
Received shutdown signal, test time was 
about 10.000000 seconds
00:21:20.093
00:21:20.093 Latency(us)
00:21:20.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.093 ===================================================================================================================
00:21:20.093 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:20.093 [2024-07-25 12:07:57.283339] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:20.093 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4170173
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9khDCbEuxF
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9khDCbEuxF
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9khDCbEuxF 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9khDCbEuxF' 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4170448 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4170448 /var/tmp/bdevperf.sock 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4170448 ']' 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.352 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.352 [2024-07-25 12:07:57.615748] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:20.352 [2024-07-25 12:07:57.615814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170448 ] 00:21:20.352 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.611 [2024-07-25 12:07:57.728541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.611 [2024-07-25 12:07:57.874150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.548 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.548 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:21.548 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9khDCbEuxF 00:21:21.548 [2024-07-25 12:07:58.718104] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.548 [2024-07-25 12:07:58.718261] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:21.548 [2024-07-25 12:07:58.730829] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:21.548 [2024-07-25 12:07:58.730861] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:21.548 [2024-07-25 12:07:58.730900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.548 [2024-07-25 12:07:58.731303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b2af0 (107): Transport endpoint is not connected 00:21:21.548 [2024-07-25 12:07:58.732284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b2af0 (9): Bad file descriptor 00:21:21.548 [2024-07-25 12:07:58.733283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:21.548 [2024-07-25 12:07:58.733309] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:21.548 [2024-07-25 12:07:58.733334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:21.548 request: 00:21:21.548 { 00:21:21.548 "name": "TLSTEST", 00:21:21.548 "trtype": "tcp", 00:21:21.548 "traddr": "10.0.0.2", 00:21:21.548 "adrfam": "ipv4", 00:21:21.548 "trsvcid": "4420", 00:21:21.548 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.548 "prchk_reftag": false, 00:21:21.548 "prchk_guard": false, 00:21:21.548 "hdgst": false, 00:21:21.548 "ddgst": false, 00:21:21.548 "psk": "/tmp/tmp.9khDCbEuxF", 00:21:21.548 "method": "bdev_nvme_attach_controller", 00:21:21.548 "req_id": 1 00:21:21.549 } 00:21:21.549 Got JSON-RPC error response 00:21:21.549 response: 00:21:21.549 { 00:21:21.549 "code": -5, 00:21:21.549 "message": "Input/output error" 00:21:21.549 } 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 4170448 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4170448 ']' 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4170448 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4170448 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4170448' 00:21:21.549 killing process with pid 4170448 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4170448 00:21:21.549 Received shutdown signal, test time was 
about 10.000000 seconds 00:21:21.549 00:21:21.549 Latency(us) 00:21:21.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.549 =================================================================================================================== 00:21:21.549 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.549 [2024-07-25 12:07:58.815378] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:21.549 12:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4170448 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:22.117 12:07:59 
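The failure recorded above starts with the target logging "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2": the initiator presented a TLS PSK identity the target has no key for (the host was registered against cnode1, not cnode2), so the handshake dies and the attach surfaces as -5 Input/output error. A minimal sketch of how that identity string is formed — the helper name is hypothetical; the format ("NVMe" + version digit + "R" for a retained PSK + two-digit hash id, 01 being SHA-256, then hostnqn and subnqn separated by spaces) matches the identity visible in the log:

```python
# Hypothetical helper mirroring the NVMe/TCP TLS PSK identity seen in the
# log line "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>".
def psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
    # "0" = PSK version, "R" = retained PSK, hash_id zero-padded (01 = SHA-256)
    return f"NVMe0R{hash_id:02d} {hostnqn} {subnqn}"

print(psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))
```

Because the target looks keys up by this exact string, registering host1 only against cnode1 (as this test does) guarantees the cnode2 lookup fails, which is the behavior the test asserts.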
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4170718 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4170718 /var/tmp/bdevperf.sock 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4170718 ']' 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:22.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.117 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.117 [2024-07-25 12:07:59.194970] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:22.117 [2024-07-25 12:07:59.195022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170718 ] 00:21:22.117 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.117 [2024-07-25 12:07:59.297205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.375 [2024-07-25 12:07:59.438797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.943 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.943 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:22.943 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:22.943 [2024-07-25 12:08:00.224847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:22.943 [2024-07-25 12:08:00.226576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1521030 (9): Bad file descriptor 00:21:22.943 [2024-07-25 12:08:00.227570] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.943 [2024-07-25 12:08:00.227615] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:22.943 [2024-07-25 12:08:00.227641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.943 request: 00:21:22.943 { 00:21:22.943 "name": "TLSTEST", 00:21:22.943 "trtype": "tcp", 00:21:22.943 "traddr": "10.0.0.2", 00:21:22.943 "adrfam": "ipv4", 00:21:22.943 "trsvcid": "4420", 00:21:22.943 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.943 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.943 "prchk_reftag": false, 00:21:22.943 "prchk_guard": false, 00:21:22.943 "hdgst": false, 00:21:22.943 "ddgst": false, 00:21:22.943 "method": "bdev_nvme_attach_controller", 00:21:22.943 "req_id": 1 00:21:22.943 } 00:21:22.943 Got JSON-RPC error response 00:21:22.943 response: 00:21:22.943 { 00:21:22.943 "code": -5, 00:21:22.943 "message": "Input/output error" 00:21:22.943 } 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 4170718 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4170718 ']' 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4170718 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4170718 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:23.202 12:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4170718' 00:21:23.202 killing process with pid 4170718 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4170718 00:21:23.202 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.202 00:21:23.202 Latency(us) 00:21:23.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.202 =================================================================================================================== 00:21:23.202 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:23.202 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4170718 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 4164861 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4164861 ']' 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4164861 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4164861 00:21:23.461 
12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4164861' 00:21:23.461 killing process with pid 4164861 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4164861 00:21:23.461 [2024-07-25 12:08:00.639350] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.461 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4164861 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.oOeGLQ1ImM 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.oOeGLQ1ImM 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4171076 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4171076 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4171076 ']' 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
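The `format_interchange_psk` step traced above wraps the configured hex key in the NVMe TLS key interchange format before writing it to `/tmp/tmp.oOeGLQ1ImM`. A sketch of that transformation, assuming (as the shell helper's python one-liner suggests) that the trailing four bytes are a little-endian CRC-32 of the key text and that the hash id (here 2) is encoded as two digits after the `NVMeTLSkey-1:` prefix:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Sketch of the NVMe TLS PSK interchange encoding used above:
    base64(key bytes + little-endian CRC-32 of those bytes), wrapped as
    NVMeTLSkey-1:<2-digit hash id>:<base64>:"""
    data = key.encode()
    data += struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(data).decode()}:"

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```

With the key and digest from the log this reproduces the `key_long` value recorded above, `NVMeTLSkey-1:02:MDAx...Njc3wWXNJw==:`; the CRC suffix is what lets a target reject a corrupted key file before attempting a handshake.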
00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.720 12:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.720 [2024-07-25 12:08:01.017080] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:23.720 [2024-07-25 12:08:01.017144] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.979 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.979 [2024-07-25 12:08:01.105242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.979 [2024-07-25 12:08:01.210424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.979 [2024-07-25 12:08:01.210470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.979 [2024-07-25 12:08:01.210483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.979 [2024-07-25 12:08:01.210494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.979 [2024-07-25 12:08:01.210504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.979 [2024-07-25 12:08:01.210537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.oOeGLQ1ImM 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oOeGLQ1ImM 00:21:24.914 12:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:25.173 [2024-07-25 12:08:02.459246] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.432 12:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:25.689 12:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:25.689 [2024-07-25 12:08:02.964638] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.689 [2024-07-25 12:08:02.964876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:25.689 12:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:25.947 malloc0 00:21:25.947 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:26.207 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:21:26.464 [2024-07-25 12:08:03.701027] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oOeGLQ1ImM 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oOeGLQ1ImM' 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4171550 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4171550 /var/tmp/bdevperf.sock 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4171550 ']' 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.465 12:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.723 [2024-07-25 12:08:03.767920] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:21:26.723 [2024-07-25 12:08:03.767978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171550 ] 00:21:26.723 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.723 [2024-07-25 12:08:03.881144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.981 [2024-07-25 12:08:04.033465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.548 12:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.548 12:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:27.548 12:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:21:27.548 [2024-07-25 12:08:04.843771] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.548 [2024-07-25 12:08:04.843923] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.806 TLSTESTn1 00:21:27.806 12:08:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:27.806 Running I/O for 10 seconds... 
00:21:40.028 00:21:40.028 Latency(us) 00:21:40.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.028 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:40.028 Verification LBA range: start 0x0 length 0x2000 00:21:40.028 TLSTESTn1 : 10.03 2830.98 11.06 0.00 0.00 45068.28 12988.04 70540.57 00:21:40.028 =================================================================================================================== 00:21:40.028 Total : 2830.98 11.06 0.00 0.00 45068.28 12988.04 70540.57 00:21:40.028 0 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 4171550 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4171550 ']' 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4171550 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4171550 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:40.028 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4171550' 00:21:40.029 killing process with pid 4171550 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4171550 00:21:40.029 Received shutdown signal, test time was about 10.000000 seconds 
00:21:40.029 00:21:40.029 Latency(us) 00:21:40.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.029 =================================================================================================================== 00:21:40.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.029 [2024-07-25 12:08:15.204744] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4171550 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.oOeGLQ1ImM 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oOeGLQ1ImM 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oOeGLQ1ImM 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oOeGLQ1ImM 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.029 12:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oOeGLQ1ImM' 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4173639 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4173639 /var/tmp/bdevperf.sock 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4173639 ']' 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.029 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.029 [2024-07-25 12:08:15.602734] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:40.029 [2024-07-25 12:08:15.602806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173639 ] 00:21:40.029 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.029 [2024-07-25 12:08:15.717649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.029 [2024-07-25 12:08:15.858918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:21:40.029 [2024-07-25 12:08:16.713504] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.029 [2024-07-25 12:08:16.713616] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:40.029 [2024-07-25 12:08:16.713638] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.oOeGLQ1ImM 00:21:40.029 request: 00:21:40.029 { 00:21:40.029 "name": "TLSTEST", 00:21:40.029 "trtype": "tcp", 00:21:40.029 "traddr": "10.0.0.2", 00:21:40.029 
"adrfam": "ipv4", 00:21:40.029 "trsvcid": "4420", 00:21:40.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.029 "prchk_reftag": false, 00:21:40.029 "prchk_guard": false, 00:21:40.029 "hdgst": false, 00:21:40.029 "ddgst": false, 00:21:40.029 "psk": "/tmp/tmp.oOeGLQ1ImM", 00:21:40.029 "method": "bdev_nvme_attach_controller", 00:21:40.029 "req_id": 1 00:21:40.029 } 00:21:40.029 Got JSON-RPC error response 00:21:40.029 response: 00:21:40.029 { 00:21:40.029 "code": -1, 00:21:40.029 "message": "Operation not permitted" 00:21:40.029 } 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 4173639 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4173639 ']' 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4173639 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4173639 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4173639' 00:21:40.029 killing process with pid 4173639 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4173639 00:21:40.029 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.029 00:21:40.029 Latency(us) 00:21:40.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:40.029 =================================================================================================================== 00:21:40.029 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:40.029 12:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4173639 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 4171076 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4171076 ']' 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4171076 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4171076 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4171076' 00:21:40.029 killing process with pid 4171076 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 4171076 00:21:40.029 [2024-07-25 12:08:17.123199] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.029 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4171076 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4173996 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4173996 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4173996 ']' 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
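The failed `bdev_nvme_attach_controller` call above shows the JSON-RPC exchange that `scripts/rpc.py` speaks to the SPDK application socket: a request object with a method name and params, answered here by an error object (`"code": -1, "message": "Operation not permitted"`). A minimal standalone sketch of that wire format, mirroring the field names printed in the log (no SPDK socket involved; the request dict is illustrative only):

```python
import json

# Request shape as printed in the log above (fields mirrored from the
# bdev_nvme_attach_controller call; this dict is illustrative, not sent).
request = {
    "method": "bdev_nvme_attach_controller",
    "req_id": 1,
    "params": {
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.oOeGLQ1ImM",
    },
}

# Error response exactly as the log shows it.
response_text = '{ "code": -1, "message": "Operation not permitted" }'
error = json.loads(response_text)

# A caller distinguishes failure by the error code in the response body,
# which is what the NOT/return-1 wrapper in tls.sh@37 relies on.
assert error["code"] == -1
print(error["message"])
```

The test harness treats this negative result as expected: `target/tls.sh@37` returns 1, and the surrounding `NOT`-style helpers in `autotest_common.sh` convert the non-zero exit status into a pass.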
00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.288 12:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.288 [2024-07-25 12:08:17.463461] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:40.288 [2024-07-25 12:08:17.463525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.288 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.288 [2024-07-25 12:08:17.551377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.547 [2024-07-25 12:08:17.657516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.547 [2024-07-25 12:08:17.657560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.547 [2024-07-25 12:08:17.657572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.547 [2024-07-25 12:08:17.657583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.547 [2024-07-25 12:08:17.657593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.547 [2024-07-25 12:08:17.657624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.115 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.116 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.116 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.116 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.116 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.oOeGLQ1ImM 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oOeGLQ1ImM 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.oOeGLQ1ImM 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oOeGLQ1ImM 00:21:41.374 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.631 [2024-07-25 12:08:18.678531] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.631 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:41.890 12:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:41.890 [2024-07-25 12:08:19.171871] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.890 [2024-07-25 12:08:19.172112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.890 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:42.148 malloc0 00:21:42.407 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.407 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:21:42.665 [2024-07-25 12:08:19.924409] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:42.665 [2024-07-25 12:08:19.924446] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:42.665 [2024-07-25 12:08:19.924488] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:42.665 request: 00:21:42.665 { 
00:21:42.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.665 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.665 "psk": "/tmp/tmp.oOeGLQ1ImM", 00:21:42.665 "method": "nvmf_subsystem_add_host", 00:21:42.665 "req_id": 1 00:21:42.665 } 00:21:42.665 Got JSON-RPC error response 00:21:42.665 response: 00:21:42.665 { 00:21:42.665 "code": -32603, 00:21:42.665 "message": "Internal error" 00:21:42.665 } 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 4173996 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4173996 ']' 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4173996 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.665 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4173996 00:21:42.924 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.924 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.924 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4173996' 00:21:42.924 killing process with pid 4173996 00:21:42.924 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 4173996 00:21:42.924 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4173996 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.oOeGLQ1ImM 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4174484 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4174484 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4174484 ']' 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
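The `nvmf_subsystem_add_host --psk` call above failed with `tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file`, and the test then runs `chmod 0600` on the key file (`tls.sh@181`) before restarting the target and retrying. A minimal sketch of that kind of permission gate, assuming the policy is strictly "owner read/write only" (the actual SPDK check may accept other sufficiently restrictive modes):

```python
import os
import stat
import tempfile

def psk_permissions_ok(path: str) -> bool:
    # Reject any PSK file readable or writable by group/other,
    # mirroring the chmod 0600 the test applies before retrying.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600

# Create a stand-in key file (contents are a placeholder, not a real PSK).
with tempfile.NamedTemporaryFile(delete=False) as f:
    psk_path = f.name
    f.write(b"placeholder-psk-material")

os.chmod(psk_path, 0o644)           # group/other readable: rejected
assert not psk_permissions_ok(psk_path)

os.chmod(psk_path, 0o600)           # owner-only, as tls.sh@181 does
assert psk_permissions_ok(psk_path)
os.remove(psk_path)
```

After the `chmod 0600`, the same `nvmf_subsystem_add_host --psk` sequence later in the log succeeds (with only the PSK-path deprecation warning).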
00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.183 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.183 [2024-07-25 12:08:20.317497] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:43.183 [2024-07-25 12:08:20.317566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.183 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.183 [2024-07-25 12:08:20.406911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.441 [2024-07-25 12:08:20.512886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.441 [2024-07-25 12:08:20.512931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.441 [2024-07-25 12:08:20.512944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.441 [2024-07-25 12:08:20.512956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.441 [2024-07-25 12:08:20.512965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:43.441 [2024-07-25 12:08:20.512992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.oOeGLQ1ImM 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oOeGLQ1ImM 00:21:44.007 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:44.264 [2024-07-25 12:08:21.521477] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.264 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:44.525 12:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:44.784 [2024-07-25 12:08:22.014823] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.784 [2024-07-25 12:08:22.015063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:44.784 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:45.044 malloc0 00:21:45.044 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:45.303 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:21:45.562 [2024-07-25 12:08:22.759242] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4175024 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4175024 /var/tmp/bdevperf.sock 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4175024 ']' 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:45.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.562 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.562 [2024-07-25 12:08:22.827468] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:45.562 [2024-07-25 12:08:22.827528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175024 ] 00:21:45.562 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.820 [2024-07-25 12:08:22.938997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.820 [2024-07-25 12:08:23.087435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.752 12:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.752 12:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:46.752 12:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:21:46.752 [2024-07-25 12:08:23.999389] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.752 [2024-07-25 12:08:23.999537] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:47.010 TLSTESTn1 00:21:47.010 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:47.269 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:47.269 "subsystems": [ 00:21:47.269 { 00:21:47.269 "subsystem": "keyring", 00:21:47.269 "config": [] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "iobuf", 00:21:47.269 "config": [ 00:21:47.269 { 00:21:47.269 "method": "iobuf_set_options", 00:21:47.269 "params": { 00:21:47.269 "small_pool_count": 8192, 00:21:47.269 "large_pool_count": 1024, 00:21:47.269 "small_bufsize": 8192, 00:21:47.269 "large_bufsize": 135168 00:21:47.269 } 00:21:47.269 } 00:21:47.269 ] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "sock", 00:21:47.269 "config": [ 00:21:47.269 { 00:21:47.269 "method": "sock_set_default_impl", 00:21:47.269 "params": { 00:21:47.269 "impl_name": "posix" 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "sock_impl_set_options", 00:21:47.269 "params": { 00:21:47.269 "impl_name": "ssl", 00:21:47.269 "recv_buf_size": 4096, 00:21:47.269 "send_buf_size": 4096, 00:21:47.269 "enable_recv_pipe": true, 00:21:47.269 "enable_quickack": false, 00:21:47.269 "enable_placement_id": 0, 00:21:47.269 "enable_zerocopy_send_server": true, 00:21:47.269 "enable_zerocopy_send_client": false, 00:21:47.269 "zerocopy_threshold": 0, 00:21:47.269 "tls_version": 0, 00:21:47.269 "enable_ktls": false 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "sock_impl_set_options", 00:21:47.269 "params": { 00:21:47.269 "impl_name": "posix", 00:21:47.269 "recv_buf_size": 2097152, 00:21:47.269 "send_buf_size": 2097152, 00:21:47.269 "enable_recv_pipe": true, 00:21:47.269 "enable_quickack": false, 00:21:47.269 "enable_placement_id": 0, 00:21:47.269 "enable_zerocopy_send_server": true, 00:21:47.269 "enable_zerocopy_send_client": false, 00:21:47.269 "zerocopy_threshold": 0, 00:21:47.269 "tls_version": 0, 00:21:47.269 "enable_ktls": false 00:21:47.269 } 
00:21:47.269 } 00:21:47.269 ] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "vmd", 00:21:47.269 "config": [] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "accel", 00:21:47.269 "config": [ 00:21:47.269 { 00:21:47.269 "method": "accel_set_options", 00:21:47.269 "params": { 00:21:47.269 "small_cache_size": 128, 00:21:47.269 "large_cache_size": 16, 00:21:47.269 "task_count": 2048, 00:21:47.269 "sequence_count": 2048, 00:21:47.269 "buf_count": 2048 00:21:47.269 } 00:21:47.269 } 00:21:47.269 ] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "bdev", 00:21:47.269 "config": [ 00:21:47.269 { 00:21:47.269 "method": "bdev_set_options", 00:21:47.269 "params": { 00:21:47.269 "bdev_io_pool_size": 65535, 00:21:47.269 "bdev_io_cache_size": 256, 00:21:47.269 "bdev_auto_examine": true, 00:21:47.269 "iobuf_small_cache_size": 128, 00:21:47.269 "iobuf_large_cache_size": 16 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "bdev_raid_set_options", 00:21:47.269 "params": { 00:21:47.269 "process_window_size_kb": 1024, 00:21:47.269 "process_max_bandwidth_mb_sec": 0 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "bdev_iscsi_set_options", 00:21:47.269 "params": { 00:21:47.269 "timeout_sec": 30 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "bdev_nvme_set_options", 00:21:47.269 "params": { 00:21:47.269 "action_on_timeout": "none", 00:21:47.269 "timeout_us": 0, 00:21:47.269 "timeout_admin_us": 0, 00:21:47.269 "keep_alive_timeout_ms": 10000, 00:21:47.269 "arbitration_burst": 0, 00:21:47.269 "low_priority_weight": 0, 00:21:47.269 "medium_priority_weight": 0, 00:21:47.269 "high_priority_weight": 0, 00:21:47.269 "nvme_adminq_poll_period_us": 10000, 00:21:47.269 "nvme_ioq_poll_period_us": 0, 00:21:47.269 "io_queue_requests": 0, 00:21:47.269 "delay_cmd_submit": true, 00:21:47.269 "transport_retry_count": 4, 00:21:47.269 "bdev_retry_count": 3, 00:21:47.269 "transport_ack_timeout": 0, 00:21:47.269 
"ctrlr_loss_timeout_sec": 0, 00:21:47.269 "reconnect_delay_sec": 0, 00:21:47.269 "fast_io_fail_timeout_sec": 0, 00:21:47.269 "disable_auto_failback": false, 00:21:47.269 "generate_uuids": false, 00:21:47.269 "transport_tos": 0, 00:21:47.269 "nvme_error_stat": false, 00:21:47.269 "rdma_srq_size": 0, 00:21:47.269 "io_path_stat": false, 00:21:47.269 "allow_accel_sequence": false, 00:21:47.269 "rdma_max_cq_size": 0, 00:21:47.269 "rdma_cm_event_timeout_ms": 0, 00:21:47.269 "dhchap_digests": [ 00:21:47.269 "sha256", 00:21:47.269 "sha384", 00:21:47.269 "sha512" 00:21:47.269 ], 00:21:47.269 "dhchap_dhgroups": [ 00:21:47.269 "null", 00:21:47.269 "ffdhe2048", 00:21:47.269 "ffdhe3072", 00:21:47.269 "ffdhe4096", 00:21:47.269 "ffdhe6144", 00:21:47.269 "ffdhe8192" 00:21:47.269 ] 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "bdev_nvme_set_hotplug", 00:21:47.269 "params": { 00:21:47.269 "period_us": 100000, 00:21:47.269 "enable": false 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "bdev_malloc_create", 00:21:47.269 "params": { 00:21:47.269 "name": "malloc0", 00:21:47.269 "num_blocks": 8192, 00:21:47.269 "block_size": 4096, 00:21:47.269 "physical_block_size": 4096, 00:21:47.269 "uuid": "8fe32234-42d5-4bbc-b8dc-bed36ae98088", 00:21:47.269 "optimal_io_boundary": 0, 00:21:47.269 "md_size": 0, 00:21:47.269 "dif_type": 0, 00:21:47.269 "dif_is_head_of_md": false, 00:21:47.269 "dif_pi_format": 0 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "bdev_wait_for_examine" 00:21:47.269 } 00:21:47.269 ] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "nbd", 00:21:47.269 "config": [] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "scheduler", 00:21:47.269 "config": [ 00:21:47.269 { 00:21:47.269 "method": "framework_set_scheduler", 00:21:47.269 "params": { 00:21:47.269 "name": "static" 00:21:47.269 } 00:21:47.269 } 00:21:47.269 ] 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "subsystem": "nvmf", 00:21:47.269 
"config": [ 00:21:47.269 { 00:21:47.269 "method": "nvmf_set_config", 00:21:47.269 "params": { 00:21:47.269 "discovery_filter": "match_any", 00:21:47.269 "admin_cmd_passthru": { 00:21:47.269 "identify_ctrlr": false 00:21:47.269 } 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "nvmf_set_max_subsystems", 00:21:47.269 "params": { 00:21:47.269 "max_subsystems": 1024 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "nvmf_set_crdt", 00:21:47.269 "params": { 00:21:47.269 "crdt1": 0, 00:21:47.269 "crdt2": 0, 00:21:47.269 "crdt3": 0 00:21:47.269 } 00:21:47.269 }, 00:21:47.269 { 00:21:47.269 "method": "nvmf_create_transport", 00:21:47.269 "params": { 00:21:47.269 "trtype": "TCP", 00:21:47.269 "max_queue_depth": 128, 00:21:47.269 "max_io_qpairs_per_ctrlr": 127, 00:21:47.270 "in_capsule_data_size": 4096, 00:21:47.270 "max_io_size": 131072, 00:21:47.270 "io_unit_size": 131072, 00:21:47.270 "max_aq_depth": 128, 00:21:47.270 "num_shared_buffers": 511, 00:21:47.270 "buf_cache_size": 4294967295, 00:21:47.270 "dif_insert_or_strip": false, 00:21:47.270 "zcopy": false, 00:21:47.270 "c2h_success": false, 00:21:47.270 "sock_priority": 0, 00:21:47.270 "abort_timeout_sec": 1, 00:21:47.270 "ack_timeout": 0, 00:21:47.270 "data_wr_pool_size": 0 00:21:47.270 } 00:21:47.270 }, 00:21:47.270 { 00:21:47.270 "method": "nvmf_create_subsystem", 00:21:47.270 "params": { 00:21:47.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.270 "allow_any_host": false, 00:21:47.270 "serial_number": "SPDK00000000000001", 00:21:47.270 "model_number": "SPDK bdev Controller", 00:21:47.270 "max_namespaces": 10, 00:21:47.270 "min_cntlid": 1, 00:21:47.270 "max_cntlid": 65519, 00:21:47.270 "ana_reporting": false 00:21:47.270 } 00:21:47.270 }, 00:21:47.270 { 00:21:47.270 "method": "nvmf_subsystem_add_host", 00:21:47.270 "params": { 00:21:47.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.270 "host": "nqn.2016-06.io.spdk:host1", 00:21:47.270 "psk": "/tmp/tmp.oOeGLQ1ImM" 
00:21:47.270 } 00:21:47.270 }, 00:21:47.270 { 00:21:47.270 "method": "nvmf_subsystem_add_ns", 00:21:47.270 "params": { 00:21:47.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.270 "namespace": { 00:21:47.270 "nsid": 1, 00:21:47.270 "bdev_name": "malloc0", 00:21:47.270 "nguid": "8FE3223442D54BBCB8DCBED36AE98088", 00:21:47.270 "uuid": "8fe32234-42d5-4bbc-b8dc-bed36ae98088", 00:21:47.270 "no_auto_visible": false 00:21:47.270 } 00:21:47.270 } 00:21:47.270 }, 00:21:47.270 { 00:21:47.270 "method": "nvmf_subsystem_add_listener", 00:21:47.270 "params": { 00:21:47.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.270 "listen_address": { 00:21:47.270 "trtype": "TCP", 00:21:47.270 "adrfam": "IPv4", 00:21:47.270 "traddr": "10.0.0.2", 00:21:47.270 "trsvcid": "4420" 00:21:47.270 }, 00:21:47.270 "secure_channel": true 00:21:47.270 } 00:21:47.270 } 00:21:47.270 ] 00:21:47.270 } 00:21:47.270 ] 00:21:47.270 }' 00:21:47.270 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:47.529 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:47.529 "subsystems": [ 00:21:47.529 { 00:21:47.529 "subsystem": "keyring", 00:21:47.529 "config": [] 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "subsystem": "iobuf", 00:21:47.529 "config": [ 00:21:47.529 { 00:21:47.529 "method": "iobuf_set_options", 00:21:47.529 "params": { 00:21:47.529 "small_pool_count": 8192, 00:21:47.529 "large_pool_count": 1024, 00:21:47.529 "small_bufsize": 8192, 00:21:47.529 "large_bufsize": 135168 00:21:47.529 } 00:21:47.529 } 00:21:47.529 ] 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "subsystem": "sock", 00:21:47.529 "config": [ 00:21:47.529 { 00:21:47.529 "method": "sock_set_default_impl", 00:21:47.529 "params": { 00:21:47.529 "impl_name": "posix" 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "sock_impl_set_options", 00:21:47.529 
"params": { 00:21:47.529 "impl_name": "ssl", 00:21:47.529 "recv_buf_size": 4096, 00:21:47.529 "send_buf_size": 4096, 00:21:47.529 "enable_recv_pipe": true, 00:21:47.529 "enable_quickack": false, 00:21:47.529 "enable_placement_id": 0, 00:21:47.529 "enable_zerocopy_send_server": true, 00:21:47.529 "enable_zerocopy_send_client": false, 00:21:47.529 "zerocopy_threshold": 0, 00:21:47.529 "tls_version": 0, 00:21:47.529 "enable_ktls": false 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "sock_impl_set_options", 00:21:47.529 "params": { 00:21:47.529 "impl_name": "posix", 00:21:47.529 "recv_buf_size": 2097152, 00:21:47.529 "send_buf_size": 2097152, 00:21:47.529 "enable_recv_pipe": true, 00:21:47.529 "enable_quickack": false, 00:21:47.529 "enable_placement_id": 0, 00:21:47.529 "enable_zerocopy_send_server": true, 00:21:47.529 "enable_zerocopy_send_client": false, 00:21:47.529 "zerocopy_threshold": 0, 00:21:47.529 "tls_version": 0, 00:21:47.529 "enable_ktls": false 00:21:47.529 } 00:21:47.529 } 00:21:47.529 ] 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "subsystem": "vmd", 00:21:47.529 "config": [] 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "subsystem": "accel", 00:21:47.529 "config": [ 00:21:47.529 { 00:21:47.529 "method": "accel_set_options", 00:21:47.529 "params": { 00:21:47.529 "small_cache_size": 128, 00:21:47.529 "large_cache_size": 16, 00:21:47.529 "task_count": 2048, 00:21:47.529 "sequence_count": 2048, 00:21:47.529 "buf_count": 2048 00:21:47.529 } 00:21:47.529 } 00:21:47.529 ] 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "subsystem": "bdev", 00:21:47.529 "config": [ 00:21:47.529 { 00:21:47.529 "method": "bdev_set_options", 00:21:47.529 "params": { 00:21:47.529 "bdev_io_pool_size": 65535, 00:21:47.529 "bdev_io_cache_size": 256, 00:21:47.529 "bdev_auto_examine": true, 00:21:47.529 "iobuf_small_cache_size": 128, 00:21:47.529 "iobuf_large_cache_size": 16 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "bdev_raid_set_options", 
00:21:47.529 "params": { 00:21:47.529 "process_window_size_kb": 1024, 00:21:47.529 "process_max_bandwidth_mb_sec": 0 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "bdev_iscsi_set_options", 00:21:47.529 "params": { 00:21:47.529 "timeout_sec": 30 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "bdev_nvme_set_options", 00:21:47.529 "params": { 00:21:47.529 "action_on_timeout": "none", 00:21:47.529 "timeout_us": 0, 00:21:47.529 "timeout_admin_us": 0, 00:21:47.529 "keep_alive_timeout_ms": 10000, 00:21:47.529 "arbitration_burst": 0, 00:21:47.529 "low_priority_weight": 0, 00:21:47.529 "medium_priority_weight": 0, 00:21:47.529 "high_priority_weight": 0, 00:21:47.529 "nvme_adminq_poll_period_us": 10000, 00:21:47.529 "nvme_ioq_poll_period_us": 0, 00:21:47.529 "io_queue_requests": 512, 00:21:47.529 "delay_cmd_submit": true, 00:21:47.529 "transport_retry_count": 4, 00:21:47.529 "bdev_retry_count": 3, 00:21:47.529 "transport_ack_timeout": 0, 00:21:47.529 "ctrlr_loss_timeout_sec": 0, 00:21:47.529 "reconnect_delay_sec": 0, 00:21:47.529 "fast_io_fail_timeout_sec": 0, 00:21:47.529 "disable_auto_failback": false, 00:21:47.529 "generate_uuids": false, 00:21:47.529 "transport_tos": 0, 00:21:47.529 "nvme_error_stat": false, 00:21:47.529 "rdma_srq_size": 0, 00:21:47.529 "io_path_stat": false, 00:21:47.529 "allow_accel_sequence": false, 00:21:47.529 "rdma_max_cq_size": 0, 00:21:47.529 "rdma_cm_event_timeout_ms": 0, 00:21:47.529 "dhchap_digests": [ 00:21:47.529 "sha256", 00:21:47.529 "sha384", 00:21:47.529 "sha512" 00:21:47.529 ], 00:21:47.529 "dhchap_dhgroups": [ 00:21:47.529 "null", 00:21:47.529 "ffdhe2048", 00:21:47.529 "ffdhe3072", 00:21:47.529 "ffdhe4096", 00:21:47.529 "ffdhe6144", 00:21:47.529 "ffdhe8192" 00:21:47.529 ] 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "bdev_nvme_attach_controller", 00:21:47.529 "params": { 00:21:47.529 "name": "TLSTEST", 00:21:47.529 "trtype": "TCP", 00:21:47.529 "adrfam": "IPv4", 
00:21:47.529 "traddr": "10.0.0.2", 00:21:47.529 "trsvcid": "4420", 00:21:47.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.529 "prchk_reftag": false, 00:21:47.529 "prchk_guard": false, 00:21:47.529 "ctrlr_loss_timeout_sec": 0, 00:21:47.529 "reconnect_delay_sec": 0, 00:21:47.529 "fast_io_fail_timeout_sec": 0, 00:21:47.529 "psk": "/tmp/tmp.oOeGLQ1ImM", 00:21:47.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.529 "hdgst": false, 00:21:47.529 "ddgst": false 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "bdev_nvme_set_hotplug", 00:21:47.529 "params": { 00:21:47.529 "period_us": 100000, 00:21:47.529 "enable": false 00:21:47.529 } 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "method": "bdev_wait_for_examine" 00:21:47.529 } 00:21:47.529 ] 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "subsystem": "nbd", 00:21:47.529 "config": [] 00:21:47.529 } 00:21:47.529 ] 00:21:47.529 }' 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 4175024 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4175024 ']' 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4175024 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4175024 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4175024' 00:21:47.530 killing process with 
pid 4175024 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4175024 00:21:47.530 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.530 00:21:47.530 Latency(us) 00:21:47.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.530 =================================================================================================================== 00:21:47.530 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.530 [2024-07-25 12:08:24.821171] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:47.530 12:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4175024 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 4174484 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4174484 ']' 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4174484 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4174484 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4174484' 00:21:48.097 killing process with pid 4174484 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 4174484 00:21:48.097 [2024-07-25 12:08:25.225083] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:48.097 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4174484 00:21:48.356 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:48.356 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.356 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.356 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:48.356 "subsystems": [ 00:21:48.356 { 00:21:48.356 "subsystem": "keyring", 00:21:48.356 "config": [] 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "subsystem": "iobuf", 00:21:48.356 "config": [ 00:21:48.356 { 00:21:48.356 "method": "iobuf_set_options", 00:21:48.356 "params": { 00:21:48.356 "small_pool_count": 8192, 00:21:48.356 "large_pool_count": 1024, 00:21:48.356 "small_bufsize": 8192, 00:21:48.356 "large_bufsize": 135168 00:21:48.356 } 00:21:48.356 } 00:21:48.356 ] 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "subsystem": "sock", 00:21:48.356 "config": [ 00:21:48.356 { 00:21:48.356 "method": "sock_set_default_impl", 00:21:48.356 "params": { 00:21:48.356 "impl_name": "posix" 00:21:48.356 } 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "method": "sock_impl_set_options", 00:21:48.356 "params": { 00:21:48.356 "impl_name": "ssl", 00:21:48.356 "recv_buf_size": 4096, 00:21:48.356 "send_buf_size": 4096, 00:21:48.356 "enable_recv_pipe": true, 00:21:48.356 "enable_quickack": false, 00:21:48.356 "enable_placement_id": 0, 00:21:48.356 "enable_zerocopy_send_server": true, 00:21:48.356 "enable_zerocopy_send_client": false, 00:21:48.356 "zerocopy_threshold": 0, 00:21:48.356 "tls_version": 0, 00:21:48.356 "enable_ktls": false 
00:21:48.356 } 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "method": "sock_impl_set_options", 00:21:48.356 "params": { 00:21:48.356 "impl_name": "posix", 00:21:48.356 "recv_buf_size": 2097152, 00:21:48.356 "send_buf_size": 2097152, 00:21:48.356 "enable_recv_pipe": true, 00:21:48.356 "enable_quickack": false, 00:21:48.356 "enable_placement_id": 0, 00:21:48.356 "enable_zerocopy_send_server": true, 00:21:48.356 "enable_zerocopy_send_client": false, 00:21:48.356 "zerocopy_threshold": 0, 00:21:48.356 "tls_version": 0, 00:21:48.356 "enable_ktls": false 00:21:48.356 } 00:21:48.356 } 00:21:48.356 ] 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "subsystem": "vmd", 00:21:48.356 "config": [] 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "subsystem": "accel", 00:21:48.356 "config": [ 00:21:48.356 { 00:21:48.356 "method": "accel_set_options", 00:21:48.356 "params": { 00:21:48.356 "small_cache_size": 128, 00:21:48.356 "large_cache_size": 16, 00:21:48.356 "task_count": 2048, 00:21:48.356 "sequence_count": 2048, 00:21:48.356 "buf_count": 2048 00:21:48.356 } 00:21:48.356 } 00:21:48.356 ] 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "subsystem": "bdev", 00:21:48.356 "config": [ 00:21:48.356 { 00:21:48.356 "method": "bdev_set_options", 00:21:48.356 "params": { 00:21:48.356 "bdev_io_pool_size": 65535, 00:21:48.356 "bdev_io_cache_size": 256, 00:21:48.356 "bdev_auto_examine": true, 00:21:48.356 "iobuf_small_cache_size": 128, 00:21:48.356 "iobuf_large_cache_size": 16 00:21:48.356 } 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "method": "bdev_raid_set_options", 00:21:48.356 "params": { 00:21:48.356 "process_window_size_kb": 1024, 00:21:48.356 "process_max_bandwidth_mb_sec": 0 00:21:48.356 } 00:21:48.356 }, 00:21:48.356 { 00:21:48.356 "method": "bdev_iscsi_set_options", 00:21:48.356 "params": { 00:21:48.356 "timeout_sec": 30 00:21:48.356 } 00:21:48.356 }, 00:21:48.356 { 00:21:48.357 "method": "bdev_nvme_set_options", 00:21:48.357 "params": { 00:21:48.357 "action_on_timeout": "none", 00:21:48.357 
"timeout_us": 0, 00:21:48.357 "timeout_admin_us": 0, 00:21:48.357 "keep_alive_timeout_ms": 10000, 00:21:48.357 "arbitration_burst": 0, 00:21:48.357 "low_priority_weight": 0, 00:21:48.357 "medium_priority_weight": 0, 00:21:48.357 "high_priority_weight": 0, 00:21:48.357 "nvme_adminq_poll_period_us": 10000, 00:21:48.357 "nvme_ioq_poll_period_us": 0, 00:21:48.357 "io_queue_requests": 0, 00:21:48.357 "delay_cmd_submit": true, 00:21:48.357 "transport_retry_count": 4, 00:21:48.357 "bdev_retry_count": 3, 00:21:48.357 "transport_ack_timeout": 0, 00:21:48.357 "ctrlr_loss_timeout_sec": 0, 00:21:48.357 "reconnect_delay_sec": 0, 00:21:48.357 "fast_io_fail_timeout_sec": 0, 00:21:48.357 "disable_auto_failback": false, 00:21:48.357 "generate_uuids": false, 00:21:48.357 "transport_tos": 0, 00:21:48.357 "nvme_error_stat": false, 00:21:48.357 "rdma_srq_size": 0, 00:21:48.357 "io_path_stat": false, 00:21:48.357 "allow_accel_sequence": false, 00:21:48.357 "rdma_max_cq_size": 0, 00:21:48.357 "rdma_cm_event_timeout_ms": 0, 00:21:48.357 "dhchap_digests": [ 00:21:48.357 "sha256", 00:21:48.357 "sha384", 00:21:48.357 "sha512" 00:21:48.357 ], 00:21:48.357 "dhchap_dhgroups": [ 00:21:48.357 "null", 00:21:48.357 "ffdhe2048", 00:21:48.357 "ffdhe3072", 00:21:48.357 "ffdhe4096", 00:21:48.357 "ffdhe6144", 00:21:48.357 "ffdhe8192" 00:21:48.357 ] 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "bdev_nvme_set_hotplug", 00:21:48.357 "params": { 00:21:48.357 "period_us": 100000, 00:21:48.357 "enable": false 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "bdev_malloc_create", 00:21:48.357 "params": { 00:21:48.357 "name": "malloc0", 00:21:48.357 "num_blocks": 8192, 00:21:48.357 "block_size": 4096, 00:21:48.357 "physical_block_size": 4096, 00:21:48.357 "uuid": "8fe32234-42d5-4bbc-b8dc-bed36ae98088", 00:21:48.357 "optimal_io_boundary": 0, 00:21:48.357 "md_size": 0, 00:21:48.357 "dif_type": 0, 00:21:48.357 "dif_is_head_of_md": false, 00:21:48.357 "dif_pi_format": 0 
00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "bdev_wait_for_examine" 00:21:48.357 } 00:21:48.357 ] 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "subsystem": "nbd", 00:21:48.357 "config": [] 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "subsystem": "scheduler", 00:21:48.357 "config": [ 00:21:48.357 { 00:21:48.357 "method": "framework_set_scheduler", 00:21:48.357 "params": { 00:21:48.357 "name": "static" 00:21:48.357 } 00:21:48.357 } 00:21:48.357 ] 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "subsystem": "nvmf", 00:21:48.357 "config": [ 00:21:48.357 { 00:21:48.357 "method": "nvmf_set_config", 00:21:48.357 "params": { 00:21:48.357 "discovery_filter": "match_any", 00:21:48.357 "admin_cmd_passthru": { 00:21:48.357 "identify_ctrlr": false 00:21:48.357 } 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_set_max_subsystems", 00:21:48.357 "params": { 00:21:48.357 "max_subsystems": 1024 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_set_crdt", 00:21:48.357 "params": { 00:21:48.357 "crdt1": 0, 00:21:48.357 "crdt2": 0, 00:21:48.357 "crdt3": 0 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_create_transport", 00:21:48.357 "params": { 00:21:48.357 "trtype": "TCP", 00:21:48.357 "max_queue_depth": 128, 00:21:48.357 "max_io_qpairs_per_ctrlr": 127, 00:21:48.357 "in_capsule_data_size": 4096, 00:21:48.357 "max_io_size": 131072, 00:21:48.357 "io_unit_size": 131072, 00:21:48.357 "max_aq_depth": 128, 00:21:48.357 "num_shared_buffers": 511, 00:21:48.357 "buf_cache_size": 4294967295, 00:21:48.357 "dif_insert_or_strip": false, 00:21:48.357 "zcopy": false, 00:21:48.357 "c2h_success": false, 00:21:48.357 "sock_priority": 0, 00:21:48.357 "abort_timeout_sec": 1, 00:21:48.357 "ack_timeout": 0, 00:21:48.357 "data_wr_pool_size": 0 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_create_subsystem", 00:21:48.357 "params": { 00:21:48.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:48.357 "allow_any_host": false, 00:21:48.357 "serial_number": "SPDK00000000000001", 00:21:48.357 "model_number": "SPDK bdev Controller", 00:21:48.357 "max_namespaces": 10, 00:21:48.357 "min_cntlid": 1, 00:21:48.357 "max_cntlid": 65519, 00:21:48.357 "ana_reporting": false 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_subsystem_add_host", 00:21:48.357 "params": { 00:21:48.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.357 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.357 "psk": "/tmp/tmp.oOeGLQ1ImM" 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_subsystem_add_ns", 00:21:48.357 "params": { 00:21:48.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.357 "namespace": { 00:21:48.357 "nsid": 1, 00:21:48.357 "bdev_name": "malloc0", 00:21:48.357 "nguid": "8FE3223442D54BBCB8DCBED36AE98088", 00:21:48.357 "uuid": "8fe32234-42d5-4bbc-b8dc-bed36ae98088", 00:21:48.357 "no_auto_visible": false 00:21:48.357 } 00:21:48.357 } 00:21:48.357 }, 00:21:48.357 { 00:21:48.357 "method": "nvmf_subsystem_add_listener", 00:21:48.357 "params": { 00:21:48.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.357 "listen_address": { 00:21:48.357 "trtype": "TCP", 00:21:48.357 "adrfam": "IPv4", 00:21:48.357 "traddr": "10.0.0.2", 00:21:48.357 "trsvcid": "4420" 00:21:48.357 }, 00:21:48.357 "secure_channel": true 00:21:48.357 } 00:21:48.357 } 00:21:48.357 ] 00:21:48.357 } 00:21:48.357 ] 00:21:48.357 }' 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4175561 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4175561 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:48.357 
12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4175561 ']' 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.357 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.357 [2024-07-25 12:08:25.526507] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:48.357 [2024-07-25 12:08:25.526564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.357 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.357 [2024-07-25 12:08:25.612386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.616 [2024-07-25 12:08:25.717900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.616 [2024-07-25 12:08:25.717944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.616 [2024-07-25 12:08:25.717957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.616 [2024-07-25 12:08:25.717969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:48.616 [2024-07-25 12:08:25.717978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.616 [2024-07-25 12:08:25.718045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.874 [2024-07-25 12:08:25.936863] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.874 [2024-07-25 12:08:25.960056] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:48.874 [2024-07-25 12:08:25.976139] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.874 [2024-07-25 12:08:25.976372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4175722 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4175722 /var/tmp/bdevperf.sock 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4175722 ']' 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.442 12:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.442 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:49.442 "subsystems": [ 00:21:49.442 { 00:21:49.442 "subsystem": "keyring", 00:21:49.442 "config": [] 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "subsystem": "iobuf", 00:21:49.442 "config": [ 00:21:49.442 { 00:21:49.442 "method": "iobuf_set_options", 00:21:49.442 "params": { 00:21:49.442 "small_pool_count": 8192, 00:21:49.442 "large_pool_count": 1024, 00:21:49.442 "small_bufsize": 8192, 00:21:49.442 "large_bufsize": 135168 00:21:49.442 } 00:21:49.442 } 00:21:49.442 ] 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "subsystem": "sock", 00:21:49.442 "config": [ 00:21:49.442 { 00:21:49.442 "method": "sock_set_default_impl", 00:21:49.442 "params": { 00:21:49.442 "impl_name": "posix" 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "sock_impl_set_options", 00:21:49.442 "params": { 00:21:49.442 "impl_name": "ssl", 00:21:49.442 "recv_buf_size": 4096, 00:21:49.442 "send_buf_size": 4096, 00:21:49.442 "enable_recv_pipe": true, 00:21:49.442 "enable_quickack": false, 00:21:49.442 "enable_placement_id": 0, 00:21:49.442 "enable_zerocopy_send_server": true, 00:21:49.442 "enable_zerocopy_send_client": false, 00:21:49.442 
"zerocopy_threshold": 0, 00:21:49.442 "tls_version": 0, 00:21:49.442 "enable_ktls": false 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "sock_impl_set_options", 00:21:49.442 "params": { 00:21:49.442 "impl_name": "posix", 00:21:49.442 "recv_buf_size": 2097152, 00:21:49.442 "send_buf_size": 2097152, 00:21:49.442 "enable_recv_pipe": true, 00:21:49.442 "enable_quickack": false, 00:21:49.442 "enable_placement_id": 0, 00:21:49.442 "enable_zerocopy_send_server": true, 00:21:49.442 "enable_zerocopy_send_client": false, 00:21:49.442 "zerocopy_threshold": 0, 00:21:49.442 "tls_version": 0, 00:21:49.442 "enable_ktls": false 00:21:49.442 } 00:21:49.442 } 00:21:49.442 ] 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "subsystem": "vmd", 00:21:49.442 "config": [] 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "subsystem": "accel", 00:21:49.442 "config": [ 00:21:49.442 { 00:21:49.442 "method": "accel_set_options", 00:21:49.442 "params": { 00:21:49.442 "small_cache_size": 128, 00:21:49.442 "large_cache_size": 16, 00:21:49.442 "task_count": 2048, 00:21:49.442 "sequence_count": 2048, 00:21:49.442 "buf_count": 2048 00:21:49.442 } 00:21:49.442 } 00:21:49.442 ] 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "subsystem": "bdev", 00:21:49.442 "config": [ 00:21:49.442 { 00:21:49.442 "method": "bdev_set_options", 00:21:49.442 "params": { 00:21:49.442 "bdev_io_pool_size": 65535, 00:21:49.442 "bdev_io_cache_size": 256, 00:21:49.442 "bdev_auto_examine": true, 00:21:49.442 "iobuf_small_cache_size": 128, 00:21:49.442 "iobuf_large_cache_size": 16 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "bdev_raid_set_options", 00:21:49.442 "params": { 00:21:49.442 "process_window_size_kb": 1024, 00:21:49.442 "process_max_bandwidth_mb_sec": 0 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "bdev_iscsi_set_options", 00:21:49.442 "params": { 00:21:49.442 "timeout_sec": 30 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": 
"bdev_nvme_set_options", 00:21:49.442 "params": { 00:21:49.442 "action_on_timeout": "none", 00:21:49.442 "timeout_us": 0, 00:21:49.442 "timeout_admin_us": 0, 00:21:49.442 "keep_alive_timeout_ms": 10000, 00:21:49.442 "arbitration_burst": 0, 00:21:49.442 "low_priority_weight": 0, 00:21:49.442 "medium_priority_weight": 0, 00:21:49.442 "high_priority_weight": 0, 00:21:49.442 "nvme_adminq_poll_period_us": 10000, 00:21:49.442 "nvme_ioq_poll_period_us": 0, 00:21:49.442 "io_queue_requests": 512, 00:21:49.442 "delay_cmd_submit": true, 00:21:49.442 "transport_retry_count": 4, 00:21:49.442 "bdev_retry_count": 3, 00:21:49.442 "transport_ack_timeout": 0, 00:21:49.442 "ctrlr_loss_timeout_sec": 0, 00:21:49.442 "reconnect_delay_sec": 0, 00:21:49.442 "fast_io_fail_timeout_sec": 0, 00:21:49.442 "disable_auto_failback": false, 00:21:49.442 "generate_uuids": false, 00:21:49.442 "transport_tos": 0, 00:21:49.442 "nvme_error_stat": false, 00:21:49.442 "rdma_srq_size": 0, 00:21:49.442 "io_path_stat": false, 00:21:49.442 "allow_accel_sequence": false, 00:21:49.442 "rdma_max_cq_size": 0, 00:21:49.442 "rdma_cm_event_timeout_ms": 0, 00:21:49.442 "dhchap_digests": [ 00:21:49.442 "sha256", 00:21:49.442 "sha384", 00:21:49.442 "sha512" 00:21:49.442 ], 00:21:49.442 "dhchap_dhgroups": [ 00:21:49.442 "null", 00:21:49.442 "ffdhe2048", 00:21:49.442 "ffdhe3072", 00:21:49.442 "ffdhe4096", 00:21:49.442 "ffdhe6144", 00:21:49.442 "ffdhe8192" 00:21:49.442 ] 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "bdev_nvme_attach_controller", 00:21:49.442 "params": { 00:21:49.442 "name": "TLSTEST", 00:21:49.442 "trtype": "TCP", 00:21:49.442 "adrfam": "IPv4", 00:21:49.442 "traddr": "10.0.0.2", 00:21:49.442 "trsvcid": "4420", 00:21:49.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.442 "prchk_reftag": false, 00:21:49.442 "prchk_guard": false, 00:21:49.442 "ctrlr_loss_timeout_sec": 0, 00:21:49.442 "reconnect_delay_sec": 0, 00:21:49.442 "fast_io_fail_timeout_sec": 0, 00:21:49.442 "psk": 
"/tmp/tmp.oOeGLQ1ImM", 00:21:49.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.442 "hdgst": false, 00:21:49.442 "ddgst": false 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "bdev_nvme_set_hotplug", 00:21:49.442 "params": { 00:21:49.442 "period_us": 100000, 00:21:49.442 "enable": false 00:21:49.442 } 00:21:49.442 }, 00:21:49.442 { 00:21:49.442 "method": "bdev_wait_for_examine" 00:21:49.443 } 00:21:49.443 ] 00:21:49.443 }, 00:21:49.443 { 00:21:49.443 "subsystem": "nbd", 00:21:49.443 "config": [] 00:21:49.443 } 00:21:49.443 ] 00:21:49.443 }' 00:21:49.443 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.443 [2024-07-25 12:08:26.548650] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:49.443 [2024-07-25 12:08:26.548715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175722 ] 00:21:49.443 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.443 [2024-07-25 12:08:26.662344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.701 [2024-07-25 12:08:26.814239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.960 [2024-07-25 12:08:27.023464] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.960 [2024-07-25 12:08:27.023641] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:50.218 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.218 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:50.218 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.477 Running I/O for 10 seconds... 00:22:00.493 00:22:00.493 Latency(us) 00:22:00.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.493 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.493 Verification LBA range: start 0x0 length 0x2000 00:22:00.493 TLSTESTn1 : 10.02 2854.98 11.15 0.00 0.00 44713.68 9532.51 48377.48 00:22:00.493 =================================================================================================================== 00:22:00.493 Total : 2854.98 11.15 0.00 0.00 44713.68 9532.51 48377.48 00:22:00.493 0 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 4175722 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4175722 ']' 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4175722 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4175722 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4175722' 00:22:00.493 killing process with pid 4175722 00:22:00.493 12:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4175722 00:22:00.493 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.493 00:22:00.493 Latency(us) 00:22:00.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.493 =================================================================================================================== 00:22:00.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.493 [2024-07-25 12:08:37.664619] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:00.493 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4175722 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 4175561 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4175561 ']' 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4175561 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4175561 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4175561' 00:22:00.752 killing process with pid 4175561 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4175561 00:22:00.752 
[2024-07-25 12:08:37.997566] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:00.752 12:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4175561 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4177736 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4177736 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4177736 ']' 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.010 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.269 [2024-07-25 12:08:38.327779] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:01.269 [2024-07-25 12:08:38.327852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.269 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.269 [2024-07-25 12:08:38.415562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.269 [2024-07-25 12:08:38.502561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.269 [2024-07-25 12:08:38.502610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.269 [2024-07-25 12:08:38.502621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.269 [2024-07-25 12:08:38.502630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.269 [2024-07-25 12:08:38.502637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.269 [2024-07-25 12:08:38.502659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.oOeGLQ1ImM 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oOeGLQ1ImM 00:22:01.528 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:01.838 [2024-07-25 12:08:38.867442] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.838 12:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.096 12:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:02.096 [2024-07-25 12:08:39.364761] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.096 [2024-07-25 12:08:39.364974] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:02.096 12:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:02.355 malloc0 00:22:02.355 12:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.613 12:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oOeGLQ1ImM 00:22:02.872 [2024-07-25 12:08:40.116095] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4178226 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4178226 /var/tmp/bdevperf.sock 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4178226 ']' 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:02.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.872 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.130 [2024-07-25 12:08:40.188690] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:03.130 [2024-07-25 12:08:40.188752] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4178226 ] 00:22:03.130 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.130 [2024-07-25 12:08:40.270997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.130 [2024-07-25 12:08:40.375063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.066 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.066 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:04.066 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oOeGLQ1ImM 00:22:04.325 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:04.583 [2024-07-25 12:08:41.635256] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.583 nvme0n1 00:22:04.583 12:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:04.583 Running I/O for 1 seconds... 00:22:05.959 00:22:05.959 Latency(us) 00:22:05.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.959 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:05.959 Verification LBA range: start 0x0 length 0x2000 00:22:05.959 nvme0n1 : 1.02 3632.95 14.19 0.00 0.00 34795.58 9889.98 47900.86 00:22:05.959 =================================================================================================================== 00:22:05.959 Total : 3632.95 14.19 0.00 0.00 34795.58 9889.98 47900.86 00:22:05.959 0 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 4178226 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4178226 ']' 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4178226 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4178226 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4178226' 00:22:05.959 killing process with pid 4178226 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
4178226 00:22:05.959 Received shutdown signal, test time was about 1.000000 seconds 00:22:05.959 00:22:05.959 Latency(us) 00:22:05.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.959 =================================================================================================================== 00:22:05.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.959 12:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4178226 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 4177736 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4177736 ']' 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4177736 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4177736 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4177736' 00:22:05.959 killing process with pid 4177736 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4177736 00:22:05.959 [2024-07-25 12:08:43.226352] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.959 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4177736 
00:22:06.216 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:06.216 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.216 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.216 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.216 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4178767 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4178767 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4178767 ']' 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.217 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.217 [2024-07-25 12:08:43.500738] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:22:06.217 [2024-07-25 12:08:43.500787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.475 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.475 [2024-07-25 12:08:43.572406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.475 [2024-07-25 12:08:43.658339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.475 [2024-07-25 12:08:43.658382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.475 [2024-07-25 12:08:43.658393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.475 [2024-07-25 12:08:43.658402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.475 [2024-07-25 12:08:43.658411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.475 [2024-07-25 12:08:43.658439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.475 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.475 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:06.475 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.475 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.475 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.734 [2024-07-25 12:08:43.801961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.734 malloc0 00:22:06.734 [2024-07-25 12:08:43.831524] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.734 [2024-07-25 12:08:43.852969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=4178787 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 4178787 /var/tmp/bdevperf.sock 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4178787 ']' 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.734 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.734 [2024-07-25 12:08:43.925640] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:22:06.734 [2024-07-25 12:08:43.925692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4178787 ] 00:22:06.734 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.734 [2024-07-25 12:08:44.005843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.992 [2024-07-25 12:08:44.108154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.559 12:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.559 12:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:07.559 12:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oOeGLQ1ImM 00:22:07.817 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:08.074 [2024-07-25 12:08:45.256062] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.074 nvme0n1 00:22:08.074 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:08.333 Running I/O for 1 seconds... 
00:22:09.267 00:22:09.267 Latency(us) 00:22:09.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.267 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:09.267 Verification LBA range: start 0x0 length 0x2000 00:22:09.267 nvme0n1 : 1.02 3326.21 12.99 0.00 0.00 38063.28 8877.15 89128.96 00:22:09.267 =================================================================================================================== 00:22:09.267 Total : 3326.21 12.99 0.00 0.00 38063.28 8877.15 89128.96 00:22:09.267 0 00:22:09.267 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:09.267 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.267 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.525 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.525 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:09.525 "subsystems": [ 00:22:09.525 { 00:22:09.525 "subsystem": "keyring", 00:22:09.525 "config": [ 00:22:09.525 { 00:22:09.525 "method": "keyring_file_add_key", 00:22:09.525 "params": { 00:22:09.525 "name": "key0", 00:22:09.525 "path": "/tmp/tmp.oOeGLQ1ImM" 00:22:09.525 } 00:22:09.525 } 00:22:09.525 ] 00:22:09.525 }, 00:22:09.525 { 00:22:09.525 "subsystem": "iobuf", 00:22:09.525 "config": [ 00:22:09.525 { 00:22:09.525 "method": "iobuf_set_options", 00:22:09.525 "params": { 00:22:09.525 "small_pool_count": 8192, 00:22:09.525 "large_pool_count": 1024, 00:22:09.525 "small_bufsize": 8192, 00:22:09.525 "large_bufsize": 135168 00:22:09.525 } 00:22:09.525 } 00:22:09.525 ] 00:22:09.525 }, 00:22:09.525 { 00:22:09.525 "subsystem": "sock", 00:22:09.525 "config": [ 00:22:09.525 { 00:22:09.526 "method": "sock_set_default_impl", 00:22:09.526 "params": { 00:22:09.526 "impl_name": "posix" 00:22:09.526 } 
00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "sock_impl_set_options", 00:22:09.526 "params": { 00:22:09.526 "impl_name": "ssl", 00:22:09.526 "recv_buf_size": 4096, 00:22:09.526 "send_buf_size": 4096, 00:22:09.526 "enable_recv_pipe": true, 00:22:09.526 "enable_quickack": false, 00:22:09.526 "enable_placement_id": 0, 00:22:09.526 "enable_zerocopy_send_server": true, 00:22:09.526 "enable_zerocopy_send_client": false, 00:22:09.526 "zerocopy_threshold": 0, 00:22:09.526 "tls_version": 0, 00:22:09.526 "enable_ktls": false 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "sock_impl_set_options", 00:22:09.526 "params": { 00:22:09.526 "impl_name": "posix", 00:22:09.526 "recv_buf_size": 2097152, 00:22:09.526 "send_buf_size": 2097152, 00:22:09.526 "enable_recv_pipe": true, 00:22:09.526 "enable_quickack": false, 00:22:09.526 "enable_placement_id": 0, 00:22:09.526 "enable_zerocopy_send_server": true, 00:22:09.526 "enable_zerocopy_send_client": false, 00:22:09.526 "zerocopy_threshold": 0, 00:22:09.526 "tls_version": 0, 00:22:09.526 "enable_ktls": false 00:22:09.526 } 00:22:09.526 } 00:22:09.526 ] 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "subsystem": "vmd", 00:22:09.526 "config": [] 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "subsystem": "accel", 00:22:09.526 "config": [ 00:22:09.526 { 00:22:09.526 "method": "accel_set_options", 00:22:09.526 "params": { 00:22:09.526 "small_cache_size": 128, 00:22:09.526 "large_cache_size": 16, 00:22:09.526 "task_count": 2048, 00:22:09.526 "sequence_count": 2048, 00:22:09.526 "buf_count": 2048 00:22:09.526 } 00:22:09.526 } 00:22:09.526 ] 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "subsystem": "bdev", 00:22:09.526 "config": [ 00:22:09.526 { 00:22:09.526 "method": "bdev_set_options", 00:22:09.526 "params": { 00:22:09.526 "bdev_io_pool_size": 65535, 00:22:09.526 "bdev_io_cache_size": 256, 00:22:09.526 "bdev_auto_examine": true, 00:22:09.526 "iobuf_small_cache_size": 128, 00:22:09.526 "iobuf_large_cache_size": 16 
00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "bdev_raid_set_options", 00:22:09.526 "params": { 00:22:09.526 "process_window_size_kb": 1024, 00:22:09.526 "process_max_bandwidth_mb_sec": 0 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "bdev_iscsi_set_options", 00:22:09.526 "params": { 00:22:09.526 "timeout_sec": 30 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "bdev_nvme_set_options", 00:22:09.526 "params": { 00:22:09.526 "action_on_timeout": "none", 00:22:09.526 "timeout_us": 0, 00:22:09.526 "timeout_admin_us": 0, 00:22:09.526 "keep_alive_timeout_ms": 10000, 00:22:09.526 "arbitration_burst": 0, 00:22:09.526 "low_priority_weight": 0, 00:22:09.526 "medium_priority_weight": 0, 00:22:09.526 "high_priority_weight": 0, 00:22:09.526 "nvme_adminq_poll_period_us": 10000, 00:22:09.526 "nvme_ioq_poll_period_us": 0, 00:22:09.526 "io_queue_requests": 0, 00:22:09.526 "delay_cmd_submit": true, 00:22:09.526 "transport_retry_count": 4, 00:22:09.526 "bdev_retry_count": 3, 00:22:09.526 "transport_ack_timeout": 0, 00:22:09.526 "ctrlr_loss_timeout_sec": 0, 00:22:09.526 "reconnect_delay_sec": 0, 00:22:09.526 "fast_io_fail_timeout_sec": 0, 00:22:09.526 "disable_auto_failback": false, 00:22:09.526 "generate_uuids": false, 00:22:09.526 "transport_tos": 0, 00:22:09.526 "nvme_error_stat": false, 00:22:09.526 "rdma_srq_size": 0, 00:22:09.526 "io_path_stat": false, 00:22:09.526 "allow_accel_sequence": false, 00:22:09.526 "rdma_max_cq_size": 0, 00:22:09.526 "rdma_cm_event_timeout_ms": 0, 00:22:09.526 "dhchap_digests": [ 00:22:09.526 "sha256", 00:22:09.526 "sha384", 00:22:09.526 "sha512" 00:22:09.526 ], 00:22:09.526 "dhchap_dhgroups": [ 00:22:09.526 "null", 00:22:09.526 "ffdhe2048", 00:22:09.526 "ffdhe3072", 00:22:09.526 "ffdhe4096", 00:22:09.526 "ffdhe6144", 00:22:09.526 "ffdhe8192" 00:22:09.526 ] 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "bdev_nvme_set_hotplug", 00:22:09.526 "params": { 00:22:09.526 
"period_us": 100000, 00:22:09.526 "enable": false 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "bdev_malloc_create", 00:22:09.526 "params": { 00:22:09.526 "name": "malloc0", 00:22:09.526 "num_blocks": 8192, 00:22:09.526 "block_size": 4096, 00:22:09.526 "physical_block_size": 4096, 00:22:09.526 "uuid": "7bd3add4-3131-4967-aeb9-9b5a7caf3725", 00:22:09.526 "optimal_io_boundary": 0, 00:22:09.526 "md_size": 0, 00:22:09.526 "dif_type": 0, 00:22:09.526 "dif_is_head_of_md": false, 00:22:09.526 "dif_pi_format": 0 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "bdev_wait_for_examine" 00:22:09.526 } 00:22:09.526 ] 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "subsystem": "nbd", 00:22:09.526 "config": [] 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "subsystem": "scheduler", 00:22:09.526 "config": [ 00:22:09.526 { 00:22:09.526 "method": "framework_set_scheduler", 00:22:09.526 "params": { 00:22:09.526 "name": "static" 00:22:09.526 } 00:22:09.526 } 00:22:09.526 ] 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "subsystem": "nvmf", 00:22:09.526 "config": [ 00:22:09.526 { 00:22:09.526 "method": "nvmf_set_config", 00:22:09.526 "params": { 00:22:09.526 "discovery_filter": "match_any", 00:22:09.526 "admin_cmd_passthru": { 00:22:09.526 "identify_ctrlr": false 00:22:09.526 } 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_set_max_subsystems", 00:22:09.526 "params": { 00:22:09.526 "max_subsystems": 1024 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_set_crdt", 00:22:09.526 "params": { 00:22:09.526 "crdt1": 0, 00:22:09.526 "crdt2": 0, 00:22:09.526 "crdt3": 0 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_create_transport", 00:22:09.526 "params": { 00:22:09.526 "trtype": "TCP", 00:22:09.526 "max_queue_depth": 128, 00:22:09.526 "max_io_qpairs_per_ctrlr": 127, 00:22:09.526 "in_capsule_data_size": 4096, 00:22:09.526 "max_io_size": 131072, 00:22:09.526 "io_unit_size": 
131072, 00:22:09.526 "max_aq_depth": 128, 00:22:09.526 "num_shared_buffers": 511, 00:22:09.526 "buf_cache_size": 4294967295, 00:22:09.526 "dif_insert_or_strip": false, 00:22:09.526 "zcopy": false, 00:22:09.526 "c2h_success": false, 00:22:09.526 "sock_priority": 0, 00:22:09.526 "abort_timeout_sec": 1, 00:22:09.526 "ack_timeout": 0, 00:22:09.526 "data_wr_pool_size": 0 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_create_subsystem", 00:22:09.526 "params": { 00:22:09.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.526 "allow_any_host": false, 00:22:09.526 "serial_number": "00000000000000000000", 00:22:09.526 "model_number": "SPDK bdev Controller", 00:22:09.526 "max_namespaces": 32, 00:22:09.526 "min_cntlid": 1, 00:22:09.526 "max_cntlid": 65519, 00:22:09.526 "ana_reporting": false 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_subsystem_add_host", 00:22:09.526 "params": { 00:22:09.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.526 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.526 "psk": "key0" 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_subsystem_add_ns", 00:22:09.526 "params": { 00:22:09.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.526 "namespace": { 00:22:09.526 "nsid": 1, 00:22:09.526 "bdev_name": "malloc0", 00:22:09.526 "nguid": "7BD3ADD431314967AEB99B5A7CAF3725", 00:22:09.526 "uuid": "7bd3add4-3131-4967-aeb9-9b5a7caf3725", 00:22:09.526 "no_auto_visible": false 00:22:09.526 } 00:22:09.526 } 00:22:09.526 }, 00:22:09.526 { 00:22:09.526 "method": "nvmf_subsystem_add_listener", 00:22:09.526 "params": { 00:22:09.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.526 "listen_address": { 00:22:09.526 "trtype": "TCP", 00:22:09.526 "adrfam": "IPv4", 00:22:09.526 "traddr": "10.0.0.2", 00:22:09.526 "trsvcid": "4420" 00:22:09.526 }, 00:22:09.526 "secure_channel": false, 00:22:09.526 "sock_impl": "ssl" 00:22:09.526 } 00:22:09.526 } 00:22:09.526 ] 00:22:09.526 } 00:22:09.526 ] 
00:22:09.526 }' 00:22:09.527 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:09.786 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:09.786 "subsystems": [ 00:22:09.786 { 00:22:09.786 "subsystem": "keyring", 00:22:09.786 "config": [ 00:22:09.786 { 00:22:09.786 "method": "keyring_file_add_key", 00:22:09.786 "params": { 00:22:09.786 "name": "key0", 00:22:09.786 "path": "/tmp/tmp.oOeGLQ1ImM" 00:22:09.786 } 00:22:09.786 } 00:22:09.786 ] 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "subsystem": "iobuf", 00:22:09.786 "config": [ 00:22:09.786 { 00:22:09.786 "method": "iobuf_set_options", 00:22:09.786 "params": { 00:22:09.786 "small_pool_count": 8192, 00:22:09.786 "large_pool_count": 1024, 00:22:09.786 "small_bufsize": 8192, 00:22:09.786 "large_bufsize": 135168 00:22:09.786 } 00:22:09.786 } 00:22:09.786 ] 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "subsystem": "sock", 00:22:09.786 "config": [ 00:22:09.786 { 00:22:09.786 "method": "sock_set_default_impl", 00:22:09.786 "params": { 00:22:09.786 "impl_name": "posix" 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "sock_impl_set_options", 00:22:09.786 "params": { 00:22:09.786 "impl_name": "ssl", 00:22:09.786 "recv_buf_size": 4096, 00:22:09.786 "send_buf_size": 4096, 00:22:09.786 "enable_recv_pipe": true, 00:22:09.786 "enable_quickack": false, 00:22:09.786 "enable_placement_id": 0, 00:22:09.786 "enable_zerocopy_send_server": true, 00:22:09.786 "enable_zerocopy_send_client": false, 00:22:09.786 "zerocopy_threshold": 0, 00:22:09.786 "tls_version": 0, 00:22:09.786 "enable_ktls": false 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "sock_impl_set_options", 00:22:09.786 "params": { 00:22:09.786 "impl_name": "posix", 00:22:09.786 "recv_buf_size": 2097152, 00:22:09.786 "send_buf_size": 2097152, 00:22:09.786 
"enable_recv_pipe": true, 00:22:09.786 "enable_quickack": false, 00:22:09.786 "enable_placement_id": 0, 00:22:09.786 "enable_zerocopy_send_server": true, 00:22:09.786 "enable_zerocopy_send_client": false, 00:22:09.786 "zerocopy_threshold": 0, 00:22:09.786 "tls_version": 0, 00:22:09.786 "enable_ktls": false 00:22:09.786 } 00:22:09.786 } 00:22:09.786 ] 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "subsystem": "vmd", 00:22:09.786 "config": [] 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "subsystem": "accel", 00:22:09.786 "config": [ 00:22:09.786 { 00:22:09.786 "method": "accel_set_options", 00:22:09.786 "params": { 00:22:09.786 "small_cache_size": 128, 00:22:09.786 "large_cache_size": 16, 00:22:09.786 "task_count": 2048, 00:22:09.786 "sequence_count": 2048, 00:22:09.786 "buf_count": 2048 00:22:09.786 } 00:22:09.786 } 00:22:09.786 ] 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "subsystem": "bdev", 00:22:09.786 "config": [ 00:22:09.786 { 00:22:09.786 "method": "bdev_set_options", 00:22:09.786 "params": { 00:22:09.786 "bdev_io_pool_size": 65535, 00:22:09.786 "bdev_io_cache_size": 256, 00:22:09.786 "bdev_auto_examine": true, 00:22:09.786 "iobuf_small_cache_size": 128, 00:22:09.786 "iobuf_large_cache_size": 16 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "bdev_raid_set_options", 00:22:09.786 "params": { 00:22:09.786 "process_window_size_kb": 1024, 00:22:09.786 "process_max_bandwidth_mb_sec": 0 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "bdev_iscsi_set_options", 00:22:09.786 "params": { 00:22:09.786 "timeout_sec": 30 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "bdev_nvme_set_options", 00:22:09.786 "params": { 00:22:09.786 "action_on_timeout": "none", 00:22:09.786 "timeout_us": 0, 00:22:09.786 "timeout_admin_us": 0, 00:22:09.786 "keep_alive_timeout_ms": 10000, 00:22:09.786 "arbitration_burst": 0, 00:22:09.786 "low_priority_weight": 0, 00:22:09.786 "medium_priority_weight": 0, 00:22:09.786 
"high_priority_weight": 0, 00:22:09.786 "nvme_adminq_poll_period_us": 10000, 00:22:09.786 "nvme_ioq_poll_period_us": 0, 00:22:09.786 "io_queue_requests": 512, 00:22:09.786 "delay_cmd_submit": true, 00:22:09.786 "transport_retry_count": 4, 00:22:09.786 "bdev_retry_count": 3, 00:22:09.786 "transport_ack_timeout": 0, 00:22:09.786 "ctrlr_loss_timeout_sec": 0, 00:22:09.786 "reconnect_delay_sec": 0, 00:22:09.786 "fast_io_fail_timeout_sec": 0, 00:22:09.786 "disable_auto_failback": false, 00:22:09.786 "generate_uuids": false, 00:22:09.786 "transport_tos": 0, 00:22:09.786 "nvme_error_stat": false, 00:22:09.786 "rdma_srq_size": 0, 00:22:09.786 "io_path_stat": false, 00:22:09.786 "allow_accel_sequence": false, 00:22:09.786 "rdma_max_cq_size": 0, 00:22:09.786 "rdma_cm_event_timeout_ms": 0, 00:22:09.786 "dhchap_digests": [ 00:22:09.786 "sha256", 00:22:09.786 "sha384", 00:22:09.786 "sha512" 00:22:09.786 ], 00:22:09.786 "dhchap_dhgroups": [ 00:22:09.786 "null", 00:22:09.786 "ffdhe2048", 00:22:09.786 "ffdhe3072", 00:22:09.786 "ffdhe4096", 00:22:09.786 "ffdhe6144", 00:22:09.786 "ffdhe8192" 00:22:09.786 ] 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "bdev_nvme_attach_controller", 00:22:09.786 "params": { 00:22:09.786 "name": "nvme0", 00:22:09.786 "trtype": "TCP", 00:22:09.786 "adrfam": "IPv4", 00:22:09.786 "traddr": "10.0.0.2", 00:22:09.786 "trsvcid": "4420", 00:22:09.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.786 "prchk_reftag": false, 00:22:09.786 "prchk_guard": false, 00:22:09.786 "ctrlr_loss_timeout_sec": 0, 00:22:09.786 "reconnect_delay_sec": 0, 00:22:09.786 "fast_io_fail_timeout_sec": 0, 00:22:09.786 "psk": "key0", 00:22:09.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.786 "hdgst": false, 00:22:09.786 "ddgst": false 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "bdev_nvme_set_hotplug", 00:22:09.786 "params": { 00:22:09.786 "period_us": 100000, 00:22:09.786 "enable": false 00:22:09.786 } 00:22:09.786 }, 
00:22:09.786 { 00:22:09.786 "method": "bdev_enable_histogram", 00:22:09.786 "params": { 00:22:09.786 "name": "nvme0n1", 00:22:09.786 "enable": true 00:22:09.786 } 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "method": "bdev_wait_for_examine" 00:22:09.786 } 00:22:09.786 ] 00:22:09.786 }, 00:22:09.786 { 00:22:09.786 "subsystem": "nbd", 00:22:09.786 "config": [] 00:22:09.786 } 00:22:09.786 ] 00:22:09.786 }' 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 4178787 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4178787 ']' 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4178787 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4178787 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4178787' 00:22:09.787 killing process with pid 4178787 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4178787 00:22:09.787 Received shutdown signal, test time was about 1.000000 seconds 00:22:09.787 00:22:09.787 Latency(us) 00:22:09.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.787 =================================================================================================================== 00:22:09.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:22:09.787 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4178787 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 4178767 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4178767 ']' 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4178767 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4178767 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4178767' 00:22:10.045 killing process with pid 4178767 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4178767 00:22:10.045 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4178767 00:22:10.304 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:10.304 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.304 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.304 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.304 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:10.304 "subsystems": [ 
00:22:10.304 { 00:22:10.304 "subsystem": "keyring", 00:22:10.304 "config": [ 00:22:10.304 { 00:22:10.304 "method": "keyring_file_add_key", 00:22:10.304 "params": { 00:22:10.304 "name": "key0", 00:22:10.304 "path": "/tmp/tmp.oOeGLQ1ImM" 00:22:10.304 } 00:22:10.304 } 00:22:10.304 ] 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "subsystem": "iobuf", 00:22:10.304 "config": [ 00:22:10.304 { 00:22:10.304 "method": "iobuf_set_options", 00:22:10.304 "params": { 00:22:10.304 "small_pool_count": 8192, 00:22:10.304 "large_pool_count": 1024, 00:22:10.304 "small_bufsize": 8192, 00:22:10.304 "large_bufsize": 135168 00:22:10.304 } 00:22:10.304 } 00:22:10.304 ] 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "subsystem": "sock", 00:22:10.304 "config": [ 00:22:10.304 { 00:22:10.304 "method": "sock_set_default_impl", 00:22:10.304 "params": { 00:22:10.304 "impl_name": "posix" 00:22:10.304 } 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "method": "sock_impl_set_options", 00:22:10.304 "params": { 00:22:10.304 "impl_name": "ssl", 00:22:10.304 "recv_buf_size": 4096, 00:22:10.304 "send_buf_size": 4096, 00:22:10.304 "enable_recv_pipe": true, 00:22:10.304 "enable_quickack": false, 00:22:10.304 "enable_placement_id": 0, 00:22:10.304 "enable_zerocopy_send_server": true, 00:22:10.304 "enable_zerocopy_send_client": false, 00:22:10.304 "zerocopy_threshold": 0, 00:22:10.304 "tls_version": 0, 00:22:10.304 "enable_ktls": false 00:22:10.304 } 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "method": "sock_impl_set_options", 00:22:10.304 "params": { 00:22:10.304 "impl_name": "posix", 00:22:10.304 "recv_buf_size": 2097152, 00:22:10.304 "send_buf_size": 2097152, 00:22:10.304 "enable_recv_pipe": true, 00:22:10.304 "enable_quickack": false, 00:22:10.304 "enable_placement_id": 0, 00:22:10.304 "enable_zerocopy_send_server": true, 00:22:10.304 "enable_zerocopy_send_client": false, 00:22:10.304 "zerocopy_threshold": 0, 00:22:10.304 "tls_version": 0, 00:22:10.304 "enable_ktls": false 00:22:10.304 } 00:22:10.304 } 
00:22:10.304 ] 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "subsystem": "vmd", 00:22:10.304 "config": [] 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "subsystem": "accel", 00:22:10.304 "config": [ 00:22:10.304 { 00:22:10.304 "method": "accel_set_options", 00:22:10.304 "params": { 00:22:10.304 "small_cache_size": 128, 00:22:10.304 "large_cache_size": 16, 00:22:10.304 "task_count": 2048, 00:22:10.304 "sequence_count": 2048, 00:22:10.304 "buf_count": 2048 00:22:10.304 } 00:22:10.304 } 00:22:10.304 ] 00:22:10.304 }, 00:22:10.304 { 00:22:10.304 "subsystem": "bdev", 00:22:10.305 "config": [ 00:22:10.305 { 00:22:10.305 "method": "bdev_set_options", 00:22:10.305 "params": { 00:22:10.305 "bdev_io_pool_size": 65535, 00:22:10.305 "bdev_io_cache_size": 256, 00:22:10.305 "bdev_auto_examine": true, 00:22:10.305 "iobuf_small_cache_size": 128, 00:22:10.305 "iobuf_large_cache_size": 16 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "bdev_raid_set_options", 00:22:10.305 "params": { 00:22:10.305 "process_window_size_kb": 1024, 00:22:10.305 "process_max_bandwidth_mb_sec": 0 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "bdev_iscsi_set_options", 00:22:10.305 "params": { 00:22:10.305 "timeout_sec": 30 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "bdev_nvme_set_options", 00:22:10.305 "params": { 00:22:10.305 "action_on_timeout": "none", 00:22:10.305 "timeout_us": 0, 00:22:10.305 "timeout_admin_us": 0, 00:22:10.305 "keep_alive_timeout_ms": 10000, 00:22:10.305 "arbitration_burst": 0, 00:22:10.305 "low_priority_weight": 0, 00:22:10.305 "medium_priority_weight": 0, 00:22:10.305 "high_priority_weight": 0, 00:22:10.305 "nvme_adminq_poll_period_us": 10000, 00:22:10.305 "nvme_ioq_poll_period_us": 0, 00:22:10.305 "io_queue_requests": 0, 00:22:10.305 "delay_cmd_submit": true, 00:22:10.305 "transport_retry_count": 4, 00:22:10.305 "bdev_retry_count": 3, 00:22:10.305 "transport_ack_timeout": 0, 00:22:10.305 "ctrlr_loss_timeout_sec": 
0, 00:22:10.305 "reconnect_delay_sec": 0, 00:22:10.305 "fast_io_fail_timeout_sec": 0, 00:22:10.305 "disable_auto_failback": false, 00:22:10.305 "generate_uuids": false, 00:22:10.305 "transport_tos": 0, 00:22:10.305 "nvme_error_stat": false, 00:22:10.305 "rdma_srq_size": 0, 00:22:10.305 "io_path_stat": false, 00:22:10.305 "allow_accel_sequence": false, 00:22:10.305 "rdma_max_cq_size": 0, 00:22:10.305 "rdma_cm_event_timeout_ms": 0, 00:22:10.305 "dhchap_digests": [ 00:22:10.305 "sha256", 00:22:10.305 "sha384", 00:22:10.305 "sha512" 00:22:10.305 ], 00:22:10.305 "dhchap_dhgroups": [ 00:22:10.305 "null", 00:22:10.305 "ffdhe2048", 00:22:10.305 "ffdhe3072", 00:22:10.305 "ffdhe4096", 00:22:10.305 "ffdhe6144", 00:22:10.305 "ffdhe8192" 00:22:10.305 ] 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "bdev_nvme_set_hotplug", 00:22:10.305 "params": { 00:22:10.305 "period_us": 100000, 00:22:10.305 "enable": false 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "bdev_malloc_create", 00:22:10.305 "params": { 00:22:10.305 "name": "malloc0", 00:22:10.305 "num_blocks": 8192, 00:22:10.305 "block_size": 4096, 00:22:10.305 "physical_block_size": 4096, 00:22:10.305 "uuid": "7bd3add4-3131-4967-aeb9-9b5a7caf3725", 00:22:10.305 "optimal_io_boundary": 0, 00:22:10.305 "md_size": 0, 00:22:10.305 "dif_type": 0, 00:22:10.305 "dif_is_head_of_md": false, 00:22:10.305 "dif_pi_format": 0 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "bdev_wait_for_examine" 00:22:10.305 } 00:22:10.305 ] 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "subsystem": "nbd", 00:22:10.305 "config": [] 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "subsystem": "scheduler", 00:22:10.305 "config": [ 00:22:10.305 { 00:22:10.305 "method": "framework_set_scheduler", 00:22:10.305 "params": { 00:22:10.305 "name": "static" 00:22:10.305 } 00:22:10.305 } 00:22:10.305 ] 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "subsystem": "nvmf", 00:22:10.305 "config": [ 00:22:10.305 { 
00:22:10.305 "method": "nvmf_set_config", 00:22:10.305 "params": { 00:22:10.305 "discovery_filter": "match_any", 00:22:10.305 "admin_cmd_passthru": { 00:22:10.305 "identify_ctrlr": false 00:22:10.305 } 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "nvmf_set_max_subsystems", 00:22:10.305 "params": { 00:22:10.305 "max_subsystems": 1024 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "nvmf_set_crdt", 00:22:10.305 "params": { 00:22:10.305 "crdt1": 0, 00:22:10.305 "crdt2": 0, 00:22:10.305 "crdt3": 0 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "nvmf_create_transport", 00:22:10.305 "params": { 00:22:10.305 "trtype": "TCP", 00:22:10.305 "max_queue_depth": 128, 00:22:10.305 "max_io_qpairs_per_ctrlr": 127, 00:22:10.305 "in_capsule_data_size": 4096, 00:22:10.305 "max_io_size": 131072, 00:22:10.305 "io_unit_size": 131072, 00:22:10.305 "max_aq_depth": 128, 00:22:10.305 "num_shared_buffers": 511, 00:22:10.305 "buf_cache_size": 4294967295, 00:22:10.305 "dif_insert_or_strip": false, 00:22:10.305 "zcopy": false, 00:22:10.305 "c2h_success": false, 00:22:10.305 "sock_priority": 0, 00:22:10.305 "abort_timeout_sec": 1, 00:22:10.305 "ack_timeout": 0, 00:22:10.305 "data_wr_pool_size": 0 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "nvmf_create_subsystem", 00:22:10.305 "params": { 00:22:10.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.305 "allow_any_host": false, 00:22:10.305 "serial_number": "00000000000000000000", 00:22:10.305 "model_number": "SPDK bdev Controller", 00:22:10.305 "max_namespaces": 32, 00:22:10.305 "min_cntlid": 1, 00:22:10.305 "max_cntlid": 65519, 00:22:10.305 "ana_reporting": false 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "nvmf_subsystem_add_host", 00:22:10.305 "params": { 00:22:10.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.305 "host": "nqn.2016-06.io.spdk:host1", 00:22:10.305 "psk": "key0" 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 
00:22:10.305 "method": "nvmf_subsystem_add_ns", 00:22:10.305 "params": { 00:22:10.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.305 "namespace": { 00:22:10.305 "nsid": 1, 00:22:10.305 "bdev_name": "malloc0", 00:22:10.305 "nguid": "7BD3ADD431314967AEB99B5A7CAF3725", 00:22:10.305 "uuid": "7bd3add4-3131-4967-aeb9-9b5a7caf3725", 00:22:10.305 "no_auto_visible": false 00:22:10.305 } 00:22:10.305 } 00:22:10.305 }, 00:22:10.305 { 00:22:10.305 "method": "nvmf_subsystem_add_listener", 00:22:10.305 "params": { 00:22:10.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.305 "listen_address": { 00:22:10.305 "trtype": "TCP", 00:22:10.305 "adrfam": "IPv4", 00:22:10.305 "traddr": "10.0.0.2", 00:22:10.305 "trsvcid": "4420" 00:22:10.305 }, 00:22:10.305 "secure_channel": false, 00:22:10.305 "sock_impl": "ssl" 00:22:10.305 } 00:22:10.305 } 00:22:10.305 ] 00:22:10.305 } 00:22:10.305 ] 00:22:10.305 }' 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4179528 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4179528 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4179528 ']' 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.305 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.305 [2024-07-25 12:08:47.524992] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:10.305 [2024-07-25 12:08:47.525052] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.305 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.565 [2024-07-25 12:08:47.610111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.565 [2024-07-25 12:08:47.700426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.565 [2024-07-25 12:08:47.700474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.565 [2024-07-25 12:08:47.700486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.565 [2024-07-25 12:08:47.700495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.565 [2024-07-25 12:08:47.700503] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.565 [2024-07-25 12:08:47.700558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.824 [2024-07-25 12:08:47.920089] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.824 [2024-07-25 12:08:47.962048] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.824 [2024-07-25 12:08:47.962261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=4179614 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 4179614 /var/tmp/bdevperf.sock 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 4179614 ']' 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:22:11.392 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.393 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:11.393 "subsystems": [ 00:22:11.393 { 00:22:11.393 "subsystem": "keyring", 00:22:11.393 "config": [ 00:22:11.393 { 00:22:11.393 "method": "keyring_file_add_key", 00:22:11.393 "params": { 00:22:11.393 "name": "key0", 00:22:11.393 "path": "/tmp/tmp.oOeGLQ1ImM" 00:22:11.393 } 00:22:11.393 } 00:22:11.393 ] 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "subsystem": "iobuf", 00:22:11.393 "config": [ 00:22:11.393 { 00:22:11.393 "method": "iobuf_set_options", 00:22:11.393 "params": { 00:22:11.393 "small_pool_count": 8192, 00:22:11.393 "large_pool_count": 1024, 00:22:11.393 "small_bufsize": 8192, 00:22:11.393 "large_bufsize": 135168 00:22:11.393 } 00:22:11.393 } 00:22:11.393 ] 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "subsystem": "sock", 00:22:11.393 "config": [ 00:22:11.393 { 00:22:11.393 "method": "sock_set_default_impl", 00:22:11.393 "params": { 00:22:11.393 "impl_name": "posix" 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "sock_impl_set_options", 00:22:11.393 "params": { 00:22:11.393 "impl_name": "ssl", 00:22:11.393 "recv_buf_size": 4096, 00:22:11.393 "send_buf_size": 4096, 00:22:11.393 "enable_recv_pipe": true, 00:22:11.393 "enable_quickack": false, 00:22:11.393 "enable_placement_id": 0, 00:22:11.393 "enable_zerocopy_send_server": true, 00:22:11.393 "enable_zerocopy_send_client": false, 00:22:11.393 "zerocopy_threshold": 0, 00:22:11.393 "tls_version": 0, 00:22:11.393 "enable_ktls": false 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "sock_impl_set_options", 00:22:11.393 "params": { 00:22:11.393 "impl_name": "posix", 
00:22:11.393 "recv_buf_size": 2097152, 00:22:11.393 "send_buf_size": 2097152, 00:22:11.393 "enable_recv_pipe": true, 00:22:11.393 "enable_quickack": false, 00:22:11.393 "enable_placement_id": 0, 00:22:11.393 "enable_zerocopy_send_server": true, 00:22:11.393 "enable_zerocopy_send_client": false, 00:22:11.393 "zerocopy_threshold": 0, 00:22:11.393 "tls_version": 0, 00:22:11.393 "enable_ktls": false 00:22:11.393 } 00:22:11.393 } 00:22:11.393 ] 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "subsystem": "vmd", 00:22:11.393 "config": [] 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "subsystem": "accel", 00:22:11.393 "config": [ 00:22:11.393 { 00:22:11.393 "method": "accel_set_options", 00:22:11.393 "params": { 00:22:11.393 "small_cache_size": 128, 00:22:11.393 "large_cache_size": 16, 00:22:11.393 "task_count": 2048, 00:22:11.393 "sequence_count": 2048, 00:22:11.393 "buf_count": 2048 00:22:11.393 } 00:22:11.393 } 00:22:11.393 ] 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "subsystem": "bdev", 00:22:11.393 "config": [ 00:22:11.393 { 00:22:11.393 "method": "bdev_set_options", 00:22:11.393 "params": { 00:22:11.393 "bdev_io_pool_size": 65535, 00:22:11.393 "bdev_io_cache_size": 256, 00:22:11.393 "bdev_auto_examine": true, 00:22:11.393 "iobuf_small_cache_size": 128, 00:22:11.393 "iobuf_large_cache_size": 16 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_raid_set_options", 00:22:11.393 "params": { 00:22:11.393 "process_window_size_kb": 1024, 00:22:11.393 "process_max_bandwidth_mb_sec": 0 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_iscsi_set_options", 00:22:11.393 "params": { 00:22:11.393 "timeout_sec": 30 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_nvme_set_options", 00:22:11.393 "params": { 00:22:11.393 "action_on_timeout": "none", 00:22:11.393 "timeout_us": 0, 00:22:11.393 "timeout_admin_us": 0, 00:22:11.393 "keep_alive_timeout_ms": 10000, 00:22:11.393 "arbitration_burst": 0, 00:22:11.393 
"low_priority_weight": 0, 00:22:11.393 "medium_priority_weight": 0, 00:22:11.393 "high_priority_weight": 0, 00:22:11.393 "nvme_adminq_poll_period_us": 10000, 00:22:11.393 "nvme_ioq_poll_period_us": 0, 00:22:11.393 "io_queue_requests": 512, 00:22:11.393 "delay_cmd_submit": true, 00:22:11.393 "transport_retry_count": 4, 00:22:11.393 "bdev_retry_count": 3, 00:22:11.393 "transport_ack_timeout": 0, 00:22:11.393 "ctrlr_loss_timeout_sec": 0, 00:22:11.393 "reconnect_delay_sec": 0, 00:22:11.393 "fast_io_fail_timeout_sec": 0, 00:22:11.393 "disable_auto_failback": false, 00:22:11.393 "generate_uuids": false, 00:22:11.393 "transport_tos": 0, 00:22:11.393 "nvme_error_stat": false, 00:22:11.393 "rdma_srq_size": 0, 00:22:11.393 "io_path_stat": false, 00:22:11.393 "allow_accel_sequence": false, 00:22:11.393 "rdma_max_cq_size": 0, 00:22:11.393 "rdma_cm_event_timeout_ms": 0, 00:22:11.393 "dhchap_digests": [ 00:22:11.393 "sha256", 00:22:11.393 "sha384", 00:22:11.393 "sha512" 00:22:11.393 ], 00:22:11.393 "dhchap_dhgroups": [ 00:22:11.393 "null", 00:22:11.393 "ffdhe2048", 00:22:11.393 "ffdhe3072", 00:22:11.393 "ffdhe4096", 00:22:11.393 "ffdhe6144", 00:22:11.393 "ffdhe8192" 00:22:11.393 ] 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_nvme_attach_controller", 00:22:11.393 "params": { 00:22:11.393 "name": "nvme0", 00:22:11.393 "trtype": "TCP", 00:22:11.393 "adrfam": "IPv4", 00:22:11.393 "traddr": "10.0.0.2", 00:22:11.393 "trsvcid": "4420", 00:22:11.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.393 "prchk_reftag": false, 00:22:11.393 "prchk_guard": false, 00:22:11.393 "ctrlr_loss_timeout_sec": 0, 00:22:11.393 "reconnect_delay_sec": 0, 00:22:11.393 "fast_io_fail_timeout_sec": 0, 00:22:11.393 "psk": "key0", 00:22:11.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.393 "hdgst": false, 00:22:11.393 "ddgst": false 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_nvme_set_hotplug", 00:22:11.393 "params": { 00:22:11.393 
"period_us": 100000, 00:22:11.393 "enable": false 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_enable_histogram", 00:22:11.393 "params": { 00:22:11.393 "name": "nvme0n1", 00:22:11.393 "enable": true 00:22:11.393 } 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "method": "bdev_wait_for_examine" 00:22:11.393 } 00:22:11.393 ] 00:22:11.393 }, 00:22:11.393 { 00:22:11.393 "subsystem": "nbd", 00:22:11.393 "config": [] 00:22:11.393 } 00:22:11.393 ] 00:22:11.393 }' 00:22:11.393 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.393 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.393 [2024-07-25 12:08:48.548457] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:11.393 [2024-07-25 12:08:48.548517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179614 ] 00:22:11.393 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.393 [2024-07-25 12:08:48.631799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.654 [2024-07-25 12:08:48.736744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.654 [2024-07-25 12:08:48.901445] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.251 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.251 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:12.251 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:12.251 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.509 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.509 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.766 Running I/O for 1 seconds... 00:22:13.699 00:22:13.699 Latency(us) 00:22:13.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.699 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:13.699 Verification LBA range: start 0x0 length 0x2000 00:22:13.699 nvme0n1 : 1.02 3572.31 13.95 0.00 0.00 35430.62 9711.24 56241.80 00:22:13.699 =================================================================================================================== 00:22:13.699 Total : 3572.31 13.95 0.00 0.00 35430.62 9711.24 56241.80 00:22:13.699 0 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:13.699 
12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:13.699 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:13.699 nvmf_trace.0 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 4179614 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4179614 ']' 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4179614 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4179614 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4179614' 00:22:13.957 killing process with pid 4179614 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4179614 00:22:13.957 Received shutdown signal, test time was about 1.000000 seconds 00:22:13.957 00:22:13.957 Latency(us) 00:22:13.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.957 =================================================================================================================== 00:22:13.957 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.957 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4179614 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.215 rmmod nvme_tcp 00:22:14.215 rmmod nvme_fabrics 00:22:14.215 rmmod nvme_keyring 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4179528 ']' 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4179528 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 4179528 ']' 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 4179528 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 4179528 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4179528' 00:22:14.215 killing process with pid 4179528 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 4179528 00:22:14.215 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 4179528 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.474 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9khDCbEuxF /tmp/tmp.RJVqLw1EkN /tmp/tmp.oOeGLQ1ImM 00:22:17.004 00:22:17.004 real 1m33.406s 00:22:17.004 user 2m32.353s 00:22:17.004 sys 0m27.185s 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.004 ************************************ 00:22:17.004 END TEST nvmf_tls 00:22:17.004 ************************************ 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.004 ************************************ 00:22:17.004 START TEST nvmf_fips 00:22:17.004 ************************************ 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:17.004 * Looking for test storage... 
00:22:17.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.004 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:17.005 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:17.006 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:17.006 Error setting digest 00:22:17.006 00C2DD1CF17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:17.006 00C2DD1CF17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.006 12:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.006 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.567 12:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:23.567 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:23.567 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.567 12:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:23.567 Found net devices under 0000:af:00.0: cvl_0_0 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.567 
12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:23.567 Found net devices under 0000:af:00.1: cvl_0_1 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:23.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:23.567 00:22:23.567 --- 10.0.0.2 ping statistics --- 00:22:23.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.567 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:22:23.567 00:22:23.567 --- 10.0.0.1 ping statistics --- 00:22:23.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.567 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.567 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4183892 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4183892 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 4183892 ']' 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.568 12:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.568 [2024-07-25 12:09:00.035033] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:22:23.568 [2024-07-25 12:09:00.035087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.568 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.568 [2024-07-25 12:09:00.109920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.568 [2024-07-25 12:09:00.214973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.568 [2024-07-25 12:09:00.215027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.568 [2024-07-25 12:09:00.215040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.568 [2024-07-25 12:09:00.215052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.568 [2024-07-25 12:09:00.215063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.568 [2024-07-25 12:09:00.215091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.826 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:24.085 [2024-07-25 12:09:01.172491] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.085 [2024-07-25 12:09:01.188475] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.085 [2024-07-25 12:09:01.188729] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.085 [2024-07-25 12:09:01.218815] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:24.085 malloc0 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4184239 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4184239 /var/tmp/bdevperf.sock 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 4184239 ']' 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.085 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:24.085 [2024-07-25 12:09:01.324269] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:24.085 [2024-07-25 12:09:01.324332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4184239 ] 00:22:24.085 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.344 [2024-07-25 12:09:01.436809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.344 [2024-07-25 12:09:01.584773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.280 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.280 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:25.280 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:25.280 [2024-07-25 12:09:02.469281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.280 [2024-07-25 12:09:02.469451] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:25.280 TLSTESTn1 00:22:25.538 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:25.538 Running I/O for 10 seconds... 00:22:35.508 00:22:35.508 Latency(us) 00:22:35.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.508 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:35.508 Verification LBA range: start 0x0 length 0x2000 00:22:35.508 TLSTESTn1 : 10.02 2861.71 11.18 0.00 0.00 44611.27 9532.51 55050.24 00:22:35.508 =================================================================================================================== 00:22:35.508 Total : 2861.71 11.18 0.00 0.00 44611.27 9532.51 55050.24 00:22:35.508 0 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:35.508 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:35.508 nvmf_trace.0 
00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4184239 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 4184239 ']' 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 4184239 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4184239 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:35.768 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4184239' 00:22:35.769 killing process with pid 4184239 00:22:35.769 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 4184239 00:22:35.769 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.769 00:22:35.769 Latency(us) 00:22:35.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.769 =================================================================================================================== 00:22:35.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.769 [2024-07-25 12:09:12.919322] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:35.769 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
4184239 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.027 rmmod nvme_tcp 00:22:36.027 rmmod nvme_fabrics 00:22:36.027 rmmod nvme_keyring 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4183892 ']' 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4183892 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 4183892 ']' 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 4183892 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4183892 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4183892' 00:22:36.027 killing process with pid 4183892 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 4183892 00:22:36.027 [2024-07-25 12:09:13.313388] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:36.027 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 4183892 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.286 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:38.820 00:22:38.820 real 0m21.851s 00:22:38.820 user 0m24.890s 00:22:38.820 sys 
0m8.505s 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.820 ************************************ 00:22:38.820 END TEST nvmf_fips 00:22:38.820 ************************************ 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.820 12:09:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local 
-ga e810 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.092 12:09:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:44.092 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:44.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:44.092 Found net devices under 0000:af:00.0: cvl_0_0 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:44.092 
Found net devices under 0000:af:00.1: cvl_0_1 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.092 ************************************ 00:22:44.092 START TEST nvmf_perf_adq 00:22:44.092 ************************************ 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:44.092 * Looking for test storage... 
00:22:44.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.092 12:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.092 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.093 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.656 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:50.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:50.657 12:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:50.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:50.657 Found net devices under 0000:af:00.0: cvl_0_0 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:50.657 Found net devices under 0000:af:00.1: cvl_0_1 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:50.657 12:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:50.915 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:52.833 12:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.100 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:58.101 
12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:58.101 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:58.101 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.101 12:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:58.101 Found net devices under 0000:af:00.0: cvl_0_0 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.101 12:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:58.101 Found net devices under 0000:af:00.1: cvl_0_1 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.101 
12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:58.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:22:58.101 00:22:58.101 --- 10.0.0.2 ping statistics --- 00:22:58.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.101 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:22:58.101 00:22:58.101 --- 10.0.0.1 ping statistics --- 00:22:58.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.101 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1247 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1247 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1247 ']' 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.101 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.359 [2024-07-25 12:09:35.443748] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:22:58.359 [2024-07-25 12:09:35.443807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.359 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.359 [2024-07-25 12:09:35.529778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.359 [2024-07-25 12:09:35.623748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.359 [2024-07-25 12:09:35.623802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.359 [2024-07-25 12:09:35.623813] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.359 [2024-07-25 12:09:35.623821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.359 [2024-07-25 12:09:35.623829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.359 [2024-07-25 12:09:35.623887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.359 [2024-07-25 12:09:35.624001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.359 [2024-07-25 12:09:35.624110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.359 [2024-07-25 12:09:35.624110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:59.305 12:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.305 [2024-07-25 12:09:36.590640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.305 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.564 Malloc1 00:22:59.564 12:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.564 [2024-07-25 12:09:36.650583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1587 00:22:59.564 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:59.564 12:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:59.564 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:01.471 "tick_rate": 2200000000, 00:23:01.471 "poll_groups": [ 00:23:01.471 { 00:23:01.471 "name": "nvmf_tgt_poll_group_000", 00:23:01.471 "admin_qpairs": 1, 00:23:01.471 "io_qpairs": 1, 00:23:01.471 "current_admin_qpairs": 1, 00:23:01.471 "current_io_qpairs": 1, 00:23:01.471 "pending_bdev_io": 0, 00:23:01.471 "completed_nvme_io": 11713, 00:23:01.471 "transports": [ 00:23:01.471 { 00:23:01.471 "trtype": "TCP" 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 }, 00:23:01.471 { 00:23:01.471 "name": "nvmf_tgt_poll_group_001", 00:23:01.471 "admin_qpairs": 0, 00:23:01.471 "io_qpairs": 1, 00:23:01.471 "current_admin_qpairs": 0, 00:23:01.471 "current_io_qpairs": 1, 00:23:01.471 "pending_bdev_io": 0, 00:23:01.471 "completed_nvme_io": 8050, 00:23:01.471 "transports": [ 00:23:01.471 { 00:23:01.471 "trtype": "TCP" 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 }, 00:23:01.471 { 00:23:01.471 "name": "nvmf_tgt_poll_group_002", 00:23:01.471 "admin_qpairs": 0, 00:23:01.471 "io_qpairs": 1, 00:23:01.471 "current_admin_qpairs": 0, 00:23:01.471 "current_io_qpairs": 1, 00:23:01.471 "pending_bdev_io": 0, 
00:23:01.471 "completed_nvme_io": 8129, 00:23:01.471 "transports": [ 00:23:01.471 { 00:23:01.471 "trtype": "TCP" 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 }, 00:23:01.471 { 00:23:01.471 "name": "nvmf_tgt_poll_group_003", 00:23:01.471 "admin_qpairs": 0, 00:23:01.471 "io_qpairs": 1, 00:23:01.471 "current_admin_qpairs": 0, 00:23:01.471 "current_io_qpairs": 1, 00:23:01.471 "pending_bdev_io": 0, 00:23:01.471 "completed_nvme_io": 13795, 00:23:01.471 "transports": [ 00:23:01.471 { 00:23:01.471 "trtype": "TCP" 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 } 00:23:01.471 ] 00:23:01.471 }' 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:01.471 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1587 00:23:09.719 Initializing NVMe Controllers 00:23:09.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:09.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:09.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:09.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:09.719 Initialization complete. Launching workers. 
00:23:09.719 ======================================================== 00:23:09.719 Latency(us) 00:23:09.719 Device Information : IOPS MiB/s Average min max 00:23:09.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4344.05 16.97 14746.28 9660.19 21048.09 00:23:09.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4307.65 16.83 14868.72 5509.95 25168.28 00:23:09.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7267.38 28.39 8815.24 3636.47 43987.61 00:23:09.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6222.44 24.31 10290.79 3280.32 17855.96 00:23:09.719 ======================================================== 00:23:09.719 Total : 22141.52 86.49 11571.26 3280.32 43987.61 00:23:09.719 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.719 rmmod nvme_tcp 00:23:09.719 rmmod nvme_fabrics 00:23:09.719 rmmod nvme_keyring 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:09.719 12:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1247 ']' 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1247 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1247 ']' 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1247 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247' 00:23:09.719 killing process with pid 1247 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1247 00:23:09.719 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1247 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.978 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.514 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.514 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:12.514 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:13.452 12:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:15.357 12:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.629 12:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 
-- # local -ga mlx 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:20.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:20.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:20.629 Found net devices under 0000:af:00.0: cvl_0_0 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.629 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.630 12:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:20.630 Found net devices under 0000:af:00.1: cvl_0_1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:23:20.630 00:23:20.630 --- 10.0.0.2 ping statistics --- 00:23:20.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.630 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:23:20.630 00:23:20.630 --- 10.0.0.1 ping statistics --- 00:23:20.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.630 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:20.630 net.core.busy_poll = 1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:20.630 net.core.busy_read = 1 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:20.630 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=5813 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 5813 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 5813 ']' 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.929 12:09:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.929 [2024-07-25 12:09:58.146894] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:23:20.929 [2024-07-25 12:09:58.146956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.929 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.189 [2024-07-25 12:09:58.234397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.189 [2024-07-25 12:09:58.328388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.189 [2024-07-25 12:09:58.328432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.189 [2024-07-25 12:09:58.328442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.189 [2024-07-25 12:09:58.328451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.189 [2024-07-25 12:09:58.328458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.189 [2024-07-25 12:09:58.328508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.189 [2024-07-25 12:09:58.328638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.189 [2024-07-25 12:09:58.328694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.189 [2024-07-25 12:09:58.328694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:22.125 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:22.126 12:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 [2024-07-25 12:09:59.286665] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 Malloc1
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.126 [2024-07-25 12:09:59.342361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=6022
00:23:22.126 12:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:23:22.126 12:09:59
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:23:22.126 EAL: No free 2048 kB hugepages reported on node 1
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:23:24.659 "tick_rate": 2200000000,
00:23:24.659 "poll_groups": [
00:23:24.659 {
00:23:24.659 "name": "nvmf_tgt_poll_group_000",
00:23:24.659 "admin_qpairs": 1,
00:23:24.659 "io_qpairs": 2,
00:23:24.659 "current_admin_qpairs": 1,
00:23:24.659 "current_io_qpairs": 2,
00:23:24.659 "pending_bdev_io": 0,
00:23:24.659 "completed_nvme_io": 16460,
00:23:24.659 "transports": [
00:23:24.659 {
00:23:24.659 "trtype": "TCP"
00:23:24.659 }
00:23:24.659 ]
00:23:24.659 },
00:23:24.659 {
00:23:24.659 "name": "nvmf_tgt_poll_group_001",
00:23:24.659 "admin_qpairs": 0,
00:23:24.659 "io_qpairs": 2,
00:23:24.659 "current_admin_qpairs": 0,
00:23:24.659 "current_io_qpairs": 2,
00:23:24.659 "pending_bdev_io": 0,
00:23:24.659 "completed_nvme_io": 10460,
00:23:24.659 "transports": [
00:23:24.659 {
00:23:24.659 "trtype": "TCP"
00:23:24.659 }
00:23:24.659 ]
00:23:24.659 },
00:23:24.659 {
00:23:24.659 "name": "nvmf_tgt_poll_group_002",
00:23:24.659 "admin_qpairs": 0,
00:23:24.659 "io_qpairs": 0,
00:23:24.659 "current_admin_qpairs": 0,
00:23:24.659 "current_io_qpairs": 0,
00:23:24.659 "pending_bdev_io": 0,
00:23:24.659 "completed_nvme_io": 0,
00:23:24.659 "transports": [
00:23:24.659 {
00:23:24.659 "trtype": "TCP"
00:23:24.659 }
00:23:24.659 ]
00:23:24.659 },
00:23:24.659 {
00:23:24.659 "name": "nvmf_tgt_poll_group_003",
00:23:24.659 "admin_qpairs": 0,
00:23:24.659 "io_qpairs": 0,
00:23:24.659 "current_admin_qpairs": 0,
00:23:24.659 "current_io_qpairs": 0,
00:23:24.659 "pending_bdev_io": 0,
00:23:24.659 "completed_nvme_io": 0,
00:23:24.659 "transports": [
00:23:24.659 {
00:23:24.659 "trtype": "TCP"
00:23:24.659 }
00:23:24.659 ]
00:23:24.659 }
00:23:24.659 ]
00:23:24.659 }'
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
00:23:24.659 12:10:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 6022
00:23:32.774 Initializing NVMe Controllers
00:23:32.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:32.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:23:32.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:23:32.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:23:32.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:23:32.774 Initialization complete. Launching workers.
00:23:32.774 ========================================================
00:23:32.775 Latency(us)
00:23:32.775 Device Information : IOPS MiB/s Average min max
00:23:32.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 2704.00 10.56 23686.03 5759.39 73343.09
00:23:32.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 2879.80 11.25 22246.14 4111.21 72769.96
00:23:32.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4400.00 17.19 14558.93 2426.51 62585.62
00:23:32.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4161.50 16.26 15438.17 2355.17 61430.21
00:23:32.775 ========================================================
00:23:32.775 Total : 14145.30 55.26 18127.34 2355.17 73343.09
00:23:32.775
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:32.775 rmmod nvme_tcp
00:23:32.775 rmmod nvme_fabrics
00:23:32.775 rmmod nvme_keyring
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 5813 ']'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 5813
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 5813 ']'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 5813
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 5813
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 5813'
00:23:32.775 killing process with pid 5813
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 5813
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 5813
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:32.775 12:10:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:34.676 12:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:34.676 12:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:23:34.676
00:23:34.676 real 0m50.723s
00:23:34.676 user 2m51.595s
00:23:34.676 sys 0m9.476s
00:23:34.676 12:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:34.676 12:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:34.676 ************************************
00:23:34.676 END TEST nvmf_perf_adq
00:23:34.676 ************************************
00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:34.935 ************************************
00:23:34.935 START TEST nvmf_shutdown
00:23:34.935 ************************************
00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:23:34.935 * Looking for test storage...
00:23:34.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.935 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.935 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:34.936 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.936 ************************************ 00:23:34.936 START TEST nvmf_shutdown_tc1 00:23:34.936 ************************************ 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.936 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.936 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.504 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:41.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.505 12:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:41.505 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:41.505 Found net devices under 0000:af:00.0: cvl_0_0 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:41.505 Found net devices under 0000:af:00.1: cvl_0_1 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.505 12:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.505 12:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:23:41.505 00:23:41.505 --- 10.0.0.2 ping statistics --- 00:23:41.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.505 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:23:41.505 00:23:41.505 --- 10.0.0.1 ping statistics --- 00:23:41.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.505 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.505 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.506 
12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=11667 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 11667 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 11667 ']' 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.506 12:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.506 [2024-07-25 12:10:18.192862] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:23:41.506 [2024-07-25 12:10:18.192928] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.506 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.506 [2024-07-25 12:10:18.282410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.506 [2024-07-25 12:10:18.389897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.506 [2024-07-25 12:10:18.389945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.506 [2024-07-25 12:10:18.389958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.506 [2024-07-25 12:10:18.389969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.506 [2024-07-25 12:10:18.389978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.506 [2024-07-25 12:10:18.390046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.506 [2024-07-25 12:10:18.390158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.506 [2024-07-25 12:10:18.390269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:41.506 [2024-07-25 12:10:18.390271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.074 [2024-07-25 12:10:19.185424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.074 12:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.074 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.074 Malloc1 00:23:42.074 [2024-07-25 12:10:19.291643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.074 Malloc2 00:23:42.074 Malloc3 00:23:42.333 Malloc4 00:23:42.333 Malloc5 00:23:42.333 Malloc6 00:23:42.333 Malloc7 00:23:42.333 Malloc8 00:23:42.592 Malloc9 
00:23:42.592 Malloc10 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=11980 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 11980 /var/tmp/bdevperf.sock 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 11980 ']' 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.592 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": 
${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 
00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 [2024-07-25 12:10:19.797458] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:23:42.593 [2024-07-25 12:10:19.797513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 
00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.593 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.593 { 00:23:42.593 "params": { 00:23:42.593 "name": "Nvme$subsystem", 00:23:42.593 "trtype": "$TEST_TRANSPORT", 00:23:42.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.593 "adrfam": "ipv4", 00:23:42.593 "trsvcid": "$NVMF_PORT", 00:23:42.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:42.593 "hdgst": ${hdgst:-false}, 00:23:42.593 "ddgst": ${ddgst:-false} 00:23:42.593 }, 00:23:42.593 "method": "bdev_nvme_attach_controller" 00:23:42.593 } 00:23:42.593 EOF 00:23:42.593 )") 00:23:42.593 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.594 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:42.594 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:42.594 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:42.594 12:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme1", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme2", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme3", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": 
"bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme4", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme5", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme6", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme7", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme8", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme9", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 },{ 00:23:42.594 "params": { 00:23:42.594 "name": "Nvme10", 00:23:42.594 "trtype": "tcp", 00:23:42.594 "traddr": "10.0.0.2", 00:23:42.594 "adrfam": "ipv4", 00:23:42.594 "trsvcid": "4420", 00:23:42.594 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:42.594 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:42.594 "hdgst": false, 00:23:42.594 "ddgst": false 00:23:42.594 }, 00:23:42.594 "method": "bdev_nvme_attach_controller" 00:23:42.594 }' 00:23:42.594 [2024-07-25 12:10:19.871539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.853 [2024-07-25 12:10:19.959506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:44.793 
12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 11980 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:44.793 12:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:45.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 11980 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 11667 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.730 { 00:23:45.730 "params": { 00:23:45.730 "name": "Nvme$subsystem", 00:23:45.730 "trtype": "$TEST_TRANSPORT", 00:23:45.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.730 "adrfam": "ipv4", 00:23:45.730 "trsvcid": 
"$NVMF_PORT", 00:23:45.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.730 "hdgst": ${hdgst:-false}, 00:23:45.730 "ddgst": ${ddgst:-false} 00:23:45.730 }, 00:23:45.730 "method": "bdev_nvme_attach_controller" 00:23:45.730 } 00:23:45.730 EOF 00:23:45.730 )") 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.730 { 00:23:45.730 "params": { 00:23:45.730 "name": "Nvme$subsystem", 00:23:45.730 "trtype": "$TEST_TRANSPORT", 00:23:45.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.730 "adrfam": "ipv4", 00:23:45.730 "trsvcid": "$NVMF_PORT", 00:23:45.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.730 "hdgst": ${hdgst:-false}, 00:23:45.730 "ddgst": ${ddgst:-false} 00:23:45.730 }, 00:23:45.730 "method": "bdev_nvme_attach_controller" 00:23:45.730 } 00:23:45.730 EOF 00:23:45.730 )") 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.730 { 00:23:45.730 "params": { 00:23:45.730 "name": "Nvme$subsystem", 00:23:45.730 "trtype": "$TEST_TRANSPORT", 00:23:45.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.730 "adrfam": "ipv4", 00:23:45.730 "trsvcid": "$NVMF_PORT", 00:23:45.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.730 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:45.730 "hdgst": ${hdgst:-false}, 00:23:45.730 "ddgst": ${ddgst:-false} 00:23:45.730 }, 00:23:45.730 "method": "bdev_nvme_attach_controller" 00:23:45.730 } 00:23:45.730 EOF 00:23:45.730 )") 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.730 { 00:23:45.730 "params": { 00:23:45.730 "name": "Nvme$subsystem", 00:23:45.730 "trtype": "$TEST_TRANSPORT", 00:23:45.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.730 "adrfam": "ipv4", 00:23:45.730 "trsvcid": "$NVMF_PORT", 00:23:45.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.730 "hdgst": ${hdgst:-false}, 00:23:45.730 "ddgst": ${ddgst:-false} 00:23:45.730 }, 00:23:45.730 "method": "bdev_nvme_attach_controller" 00:23:45.730 } 00:23:45.730 EOF 00:23:45.730 )") 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.730 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.730 { 00:23:45.730 "params": { 00:23:45.731 "name": "Nvme$subsystem", 00:23:45.731 "trtype": "$TEST_TRANSPORT", 00:23:45.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "$NVMF_PORT", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.731 "hdgst": ${hdgst:-false}, 00:23:45.731 "ddgst": ${ddgst:-false} 00:23:45.731 
}, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 } 00:23:45.731 EOF 00:23:45.731 )") 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.731 { 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme$subsystem", 00:23:45.731 "trtype": "$TEST_TRANSPORT", 00:23:45.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "$NVMF_PORT", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.731 "hdgst": ${hdgst:-false}, 00:23:45.731 "ddgst": ${ddgst:-false} 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 } 00:23:45.731 EOF 00:23:45.731 )") 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.731 { 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme$subsystem", 00:23:45.731 "trtype": "$TEST_TRANSPORT", 00:23:45.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "$NVMF_PORT", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.731 "hdgst": ${hdgst:-false}, 00:23:45.731 "ddgst": ${ddgst:-false} 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 } 00:23:45.731 EOF 00:23:45.731 )") 00:23:45.731 [2024-07-25 
12:10:22.836267] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:23:45.731 [2024-07-25 12:10:22.836325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12533 ] 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.731 { 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme$subsystem", 00:23:45.731 "trtype": "$TEST_TRANSPORT", 00:23:45.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "$NVMF_PORT", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.731 "hdgst": ${hdgst:-false}, 00:23:45.731 "ddgst": ${ddgst:-false} 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 } 00:23:45.731 EOF 00:23:45.731 )") 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.731 { 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme$subsystem", 00:23:45.731 "trtype": "$TEST_TRANSPORT", 00:23:45.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "$NVMF_PORT", 00:23:45.731 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.731 "hdgst": ${hdgst:-false}, 00:23:45.731 "ddgst": ${ddgst:-false} 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 } 00:23:45.731 EOF 00:23:45.731 )") 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:45.731 { 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme$subsystem", 00:23:45.731 "trtype": "$TEST_TRANSPORT", 00:23:45.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "$NVMF_PORT", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.731 "hdgst": ${hdgst:-false}, 00:23:45.731 "ddgst": ${ddgst:-false} 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 } 00:23:45.731 EOF 00:23:45.731 )") 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:45.731 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
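The repeated heredoc blocks traced above are nvmf/common.sh's config builder at work: for each subsystem it appends one JSON parameter fragment to a bash array (`config+=("$(cat <<-EOF ... EOF)")`), then joins the fragments with `IFS=,` and feeds the result through `jq .`. A minimal standalone sketch of that pattern follows; the function name `gen_nvmf_json`, the fixed `traddr`/`trsvcid` values, and the enclosing `[...]` wrapper are illustrative choices for this demo, not SPDK's exact helper or output shape.

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the config-assembly pattern shown in the log:
# one heredoc-generated JSON fragment per subsystem, comma-joined at the end.
gen_nvmf_json() {
  local config=() subsystem
  for subsystem in "$@"; do
    # Unquoted EOF so $subsystem expands inside the heredoc, as in the log.
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem",
              "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
  done
  local IFS=,                     # join fragments with commas via "${config[*]}"
  printf '[%s]\n' "${config[*]}"  # wrap in [] here so the demo emits valid JSON
}

gen_nvmf_json 1 2 3
```

In the traced run the joined string is handed to bdevperf via `--json /dev/fd/62`, so the target config never touches disk; the `jq .` step simply pretty-prints and validates the assembled document.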
00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:45.731 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme1", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme2", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme3", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme4", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 
00:23:45.731 "name": "Nvme5", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme6", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme7", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.731 "adrfam": "ipv4", 00:23:45.731 "trsvcid": "4420", 00:23:45.731 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:45.731 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:45.731 "hdgst": false, 00:23:45.731 "ddgst": false 00:23:45.731 }, 00:23:45.731 "method": "bdev_nvme_attach_controller" 00:23:45.731 },{ 00:23:45.731 "params": { 00:23:45.731 "name": "Nvme8", 00:23:45.731 "trtype": "tcp", 00:23:45.731 "traddr": "10.0.0.2", 00:23:45.732 "adrfam": "ipv4", 00:23:45.732 "trsvcid": "4420", 00:23:45.732 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:45.732 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:45.732 "hdgst": false, 00:23:45.732 "ddgst": false 00:23:45.732 }, 00:23:45.732 "method": "bdev_nvme_attach_controller" 00:23:45.732 },{ 00:23:45.732 "params": { 00:23:45.732 "name": "Nvme9", 00:23:45.732 "trtype": "tcp", 00:23:45.732 "traddr": "10.0.0.2", 00:23:45.732 "adrfam": "ipv4", 00:23:45.732 "trsvcid": "4420", 00:23:45.732 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:45.732 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:45.732 "hdgst": false, 00:23:45.732 "ddgst": false 00:23:45.732 }, 00:23:45.732 "method": "bdev_nvme_attach_controller" 00:23:45.732 },{ 00:23:45.732 "params": { 00:23:45.732 "name": "Nvme10", 00:23:45.732 "trtype": "tcp", 00:23:45.732 "traddr": "10.0.0.2", 00:23:45.732 "adrfam": "ipv4", 00:23:45.732 "trsvcid": "4420", 00:23:45.732 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:45.732 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:45.732 "hdgst": false, 00:23:45.732 "ddgst": false 00:23:45.732 }, 00:23:45.732 "method": "bdev_nvme_attach_controller" 00:23:45.732 }' 00:23:45.732 [2024-07-25 12:10:22.908880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.732 [2024-07-25 12:10:23.001061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.633 Running I/O for 1 seconds... 00:23:48.569 00:23:48.569 Latency(us) 00:23:48.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.569 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme1n1 : 1.10 175.10 10.94 0.00 0.00 361107.08 31933.91 310759.80 00:23:48.569 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme2n1 : 1.10 174.45 10.90 0.00 0.00 354168.71 51952.17 310759.80 00:23:48.569 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme3n1 : 1.05 182.30 11.39 0.00 0.00 330565.82 33363.78 308853.29 00:23:48.569 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme4n1 : 1.10 237.35 14.83 0.00 0.00 246685.74 11498.59 293601.28 00:23:48.569 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 
0x400 00:23:48.569 Nvme5n1 : 1.14 167.81 10.49 0.00 0.00 345251.84 53858.68 324105.31 00:23:48.569 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme6n1 : 1.22 209.31 13.08 0.00 0.00 272705.63 17158.52 284068.77 00:23:48.569 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme7n1 : 1.21 213.48 13.34 0.00 0.00 260786.50 1489.45 282162.27 00:23:48.569 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme8n1 : 1.17 218.10 13.63 0.00 0.00 248741.70 23354.65 306946.79 00:23:48.569 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme9n1 : 1.24 210.22 13.14 0.00 0.00 254225.72 1772.45 318385.80 00:23:48.569 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.569 Verification LBA range: start 0x0 length 0x400 00:23:48.569 Nvme10n1 : 1.25 204.12 12.76 0.00 0.00 257047.16 9830.40 343170.33 00:23:48.569 =================================================================================================================== 00:23:48.569 Total : 1992.23 124.51 0.00 0.00 286861.71 1489.45 343170.33 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # 
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.828 rmmod nvme_tcp 00:23:48.828 rmmod nvme_fabrics 00:23:48.828 rmmod nvme_keyring 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 11667 ']' 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 11667 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 11667 ']' 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 11667 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@955 -- # uname 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.828 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 11667 00:23:48.828 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:48.828 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:48.828 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 11667' 00:23:48.828 killing process with pid 11667 00:23:48.828 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 11667 00:23:48.828 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 11667 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:23:49.395 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.299 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.299 00:23:51.299 real 0m16.377s 00:23:51.299 user 0m38.562s 00:23:51.299 sys 0m5.987s 00:23:51.299 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.299 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.299 ************************************ 00:23:51.299 END TEST nvmf_shutdown_tc1 00:23:51.299 ************************************ 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.559 ************************************ 00:23:51.559 START TEST nvmf_shutdown_tc2 00:23:51.559 ************************************ 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.559 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.559 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.559 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.559 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:51.559 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:51.559 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.559 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.559 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:51.560 Found net devices under 0000:af:00.0: cvl_0_0 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.560 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:51.560 Found net devices under 0000:af:00.1: cvl_0_1 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.560 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.818 12:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.818 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.818 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:23:51.818 00:23:51.818 --- 10.0.0.2 ping statistics --- 00:23:51.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.818 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:51.818 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:23:51.818 00:23:51.818 --- 10.0.0.1 ping statistics --- 00:23:51.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.819 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=13684 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 13684 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 13684 ']' 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.819 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.819 [2024-07-25 12:10:29.032750] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:23:51.819 [2024-07-25 12:10:29.032803] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.819 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.077 [2024-07-25 12:10:29.120220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.077 [2024-07-25 12:10:29.229159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.077 [2024-07-25 12:10:29.229205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.077 [2024-07-25 12:10:29.229218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.077 [2024-07-25 12:10:29.229228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.077 [2024-07-25 12:10:29.229237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
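The `waitforlisten 13684` step above blocks until the freshly launched nvmf_tgt process is up and accepting RPCs on /var/tmp/spdk.sock. A minimal dry-run sketch of that polling idea (the function name `waitfor_path` and the retry counts are illustrative stand-ins, not the actual autotest_common.sh helper, which also checks the PID and probes the socket with rpc.py):

```shell
#!/usr/bin/env bash
# Illustrative sketch only: poll until a path appears, or give up after
# max_retries attempts, mirroring the "Waiting for process to start up and
# listen on UNIX domain socket /var/tmp/spdk.sock..." step in the log.
waitfor_path() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        # The real helper waits for a listening UNIX socket (test -S);
        # test -e keeps this sketch runnable without starting a target.
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

sock=$(mktemp)                     # stand-in for /var/tmp/spdk.sock
waitfor_path "$sock" 5 && echo "listening: $sock"
rm -f "$sock"
```

Only once this wait returns 0 does the script proceed to issue `rpc_cmd` calls such as `nvmf_create_transport` below.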
00:23:52.077 [2024-07-25 12:10:29.229356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.077 [2024-07-25 12:10:29.229468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.077 [2024-07-25 12:10:29.229580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:52.077 [2024-07-25 12:10:29.229581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.014 12:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.014 12:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:53.014 12:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.014 12:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.014 12:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.014 [2024-07-25 12:10:30.022722] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.014 12:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.014 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.014 Malloc1 00:23:53.014 [2024-07-25 12:10:30.129644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.014 Malloc2 00:23:53.014 Malloc3 00:23:53.014 Malloc4 00:23:53.014 Malloc5 00:23:53.272 Malloc6 00:23:53.272 Malloc7 00:23:53.272 Malloc8 00:23:53.273 Malloc9 
00:23:53.273 Malloc10 00:23:53.273 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.273 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:53.273 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.273 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=14002 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 14002 /var/tmp/bdevperf.sock 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 14002 ']' 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:53.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.532 { 00:23:53.532 "params": { 00:23:53.532 "name": "Nvme$subsystem", 00:23:53.532 "trtype": "$TEST_TRANSPORT", 00:23:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.532 "adrfam": "ipv4", 00:23:53.532 "trsvcid": "$NVMF_PORT", 00:23:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.532 "hdgst": ${hdgst:-false}, 00:23:53.532 "ddgst": ${ddgst:-false} 00:23:53.532 }, 00:23:53.532 "method": "bdev_nvme_attach_controller" 00:23:53.532 } 00:23:53.532 EOF 00:23:53.532 )") 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.532 { 00:23:53.532 "params": { 00:23:53.532 "name": "Nvme$subsystem", 00:23:53.532 "trtype": "$TEST_TRANSPORT", 00:23:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.532 
"adrfam": "ipv4", 00:23:53.532 "trsvcid": "$NVMF_PORT", 00:23:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.532 "hdgst": ${hdgst:-false}, 00:23:53.532 "ddgst": ${ddgst:-false} 00:23:53.532 }, 00:23:53.532 "method": "bdev_nvme_attach_controller" 00:23:53.532 } 00:23:53.532 EOF 00:23:53.532 )") 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.532 { 00:23:53.532 "params": { 00:23:53.532 "name": "Nvme$subsystem", 00:23:53.532 "trtype": "$TEST_TRANSPORT", 00:23:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.532 "adrfam": "ipv4", 00:23:53.532 "trsvcid": "$NVMF_PORT", 00:23:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.532 "hdgst": ${hdgst:-false}, 00:23:53.532 "ddgst": ${ddgst:-false} 00:23:53.532 }, 00:23:53.532 "method": "bdev_nvme_attach_controller" 00:23:53.532 } 00:23:53.532 EOF 00:23:53.532 )") 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.532 { 00:23:53.532 "params": { 00:23:53.532 "name": "Nvme$subsystem", 00:23:53.532 "trtype": "$TEST_TRANSPORT", 00:23:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.532 "adrfam": "ipv4", 00:23:53.532 "trsvcid": "$NVMF_PORT", 00:23:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.532 "hdgst": ${hdgst:-false}, 00:23:53.532 "ddgst": ${ddgst:-false} 00:23:53.532 }, 00:23:53.532 "method": "bdev_nvme_attach_controller" 00:23:53.532 } 00:23:53.532 EOF 00:23:53.532 )") 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.532 { 00:23:53.532 "params": { 00:23:53.532 "name": "Nvme$subsystem", 00:23:53.532 "trtype": "$TEST_TRANSPORT", 00:23:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.532 "adrfam": "ipv4", 00:23:53.532 "trsvcid": "$NVMF_PORT", 00:23:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.532 "hdgst": ${hdgst:-false}, 00:23:53.532 "ddgst": ${ddgst:-false} 00:23:53.532 }, 00:23:53.532 "method": "bdev_nvme_attach_controller" 00:23:53.532 } 00:23:53.532 EOF 00:23:53.532 )") 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.532 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.532 { 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme$subsystem", 00:23:53.533 "trtype": "$TEST_TRANSPORT", 00:23:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "$NVMF_PORT", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.533 "hdgst": ${hdgst:-false}, 00:23:53.533 "ddgst": 
${ddgst:-false} 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 } 00:23:53.533 EOF 00:23:53.533 )") 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.533 { 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme$subsystem", 00:23:53.533 "trtype": "$TEST_TRANSPORT", 00:23:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "$NVMF_PORT", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.533 "hdgst": ${hdgst:-false}, 00:23:53.533 "ddgst": ${ddgst:-false} 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 } 00:23:53.533 EOF 00:23:53.533 )") 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.533 [2024-07-25 12:10:30.648027] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:23:53.533 [2024-07-25 12:10:30.648087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14002 ] 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.533 { 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme$subsystem", 00:23:53.533 "trtype": "$TEST_TRANSPORT", 00:23:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "$NVMF_PORT", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.533 "hdgst": ${hdgst:-false}, 00:23:53.533 "ddgst": ${ddgst:-false} 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 } 00:23:53.533 EOF 00:23:53.533 )") 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.533 { 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme$subsystem", 00:23:53.533 "trtype": "$TEST_TRANSPORT", 00:23:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "$NVMF_PORT", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.533 "hdgst": ${hdgst:-false}, 00:23:53.533 "ddgst": ${ddgst:-false} 00:23:53.533 }, 00:23:53.533 "method": 
"bdev_nvme_attach_controller" 00:23:53.533 } 00:23:53.533 EOF 00:23:53.533 )") 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.533 { 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme$subsystem", 00:23:53.533 "trtype": "$TEST_TRANSPORT", 00:23:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "$NVMF_PORT", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.533 "hdgst": ${hdgst:-false}, 00:23:53.533 "ddgst": ${ddgst:-false} 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 } 00:23:53.533 EOF 00:23:53.533 )") 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
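The repeated `config+=("$(cat <<-EOF … EOF)")` fragments above are what gen_nvmf_target_json accumulates: one JSON stanza per subsystem, later joined with commas and fed to bdevperf via `--json /dev/fd/63`. A reduced sketch of that assembly (two subsystems instead of `{1..10}`, values hard-coded here instead of taken from the test environment, and the fragments wrapped in an array for standalone validity):

```shell
#!/usr/bin/env bash
# Reduced sketch of the gen_nvmf_target_json pattern seen in the log:
# one heredoc fragment per subsystem, joined with commas via IFS.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do        # the real run loops over {1..10}
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

IFS=,                            # "${config[*]}" now joins fragments with commas
printf '[%s]\n' "${config[*]}"
```

Piping the joined result through `jq .` (as nvmf/common.sh@556 does before the `IFS=,` printf below) catches any quoting mistake in the heredocs before bdevperf ever parses the file.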
00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:53.533 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme1", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme2", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme3", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme4", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 
00:23:53.533 "name": "Nvme5", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme6", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme7", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme8", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme9", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.533 "method": "bdev_nvme_attach_controller" 00:23:53.533 },{ 00:23:53.533 "params": { 00:23:53.533 "name": "Nvme10", 00:23:53.533 "trtype": "tcp", 00:23:53.533 "traddr": "10.0.0.2", 00:23:53.533 "adrfam": "ipv4", 00:23:53.533 "trsvcid": "4420", 00:23:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:53.533 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:53.533 "hdgst": false, 00:23:53.533 "ddgst": false 00:23:53.533 }, 00:23:53.534 "method": "bdev_nvme_attach_controller" 00:23:53.534 }' 00:23:53.534 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.534 [2024-07-25 12:10:30.729680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.534 [2024-07-25 12:10:30.815669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.435 Running I/O for 10 seconds... 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z 
/var/tmp/bdevperf.sock ']' 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:55.435 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:55.694 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:55.694 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:55.694 12:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.694 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.694 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.694 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.952 12:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.952 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:55.952 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:55.952 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 14002 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 14002 ']' 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 14002 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 14002 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 14002' 00:23:56.212 killing process with pid 14002 00:23:56.212 12:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 14002 00:23:56.212 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 14002 00:23:56.471 Received shutdown signal, test time was about 1.245553 seconds 00:23:56.471 00:23:56.471 Latency(us) 00:23:56.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.471 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme1n1 : 1.23 208.03 13.00 0.00 0.00 303484.74 17754.30 312666.30 00:23:56.471 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme2n1 : 1.24 207.10 12.94 0.00 0.00 299571.67 17635.14 301227.29 00:23:56.471 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme3n1 : 1.22 209.97 13.12 0.00 0.00 288941.38 20852.36 308853.29 00:23:56.471 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme4n1 : 1.18 216.41 13.53 0.00 0.00 274566.52 22043.93 299320.79 00:23:56.471 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme5n1 : 1.20 160.48 10.03 0.00 0.00 362718.18 77213.32 295507.78 00:23:56.471 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme6n1 : 1.20 160.03 10.00 0.00 0.00 355981.50 30980.65 348889.83 00:23:56.471 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme7n1 : 1.24 205.70 12.86 0.00 0.00 
272203.40 15252.01 285975.27 00:23:56.471 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.471 Nvme8n1 : 1.21 211.62 13.23 0.00 0.00 256788.95 26810.18 305040.29 00:23:56.471 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.471 Verification LBA range: start 0x0 length 0x400 00:23:56.472 Nvme9n1 : 1.20 159.70 9.98 0.00 0.00 333108.44 32172.22 320292.31 00:23:56.472 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.472 Verification LBA range: start 0x0 length 0x400 00:23:56.472 Nvme10n1 : 1.23 156.16 9.76 0.00 0.00 333355.44 41228.10 339357.32 00:23:56.472 =================================================================================================================== 00:23:56.472 Total : 1895.19 118.45 0.00 0.00 303825.48 15252.01 348889.83 00:23:56.472 12:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 13684 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:57.848 12:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.848 rmmod nvme_tcp 00:23:57.848 rmmod nvme_fabrics 00:23:57.848 rmmod nvme_keyring 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 13684 ']' 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 13684 00:23:57.848 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 13684 ']' 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 13684 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.849 12:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 13684 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 13684' 00:23:57.849 killing process with pid 13684 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 13684 00:23:57.849 12:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 13684 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.421 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.367 12:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.367 00:24:00.367 real 0m8.915s 00:24:00.367 user 0m27.984s 00:24:00.367 sys 0m1.601s 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.367 ************************************ 00:24:00.367 END TEST nvmf_shutdown_tc2 00:24:00.367 ************************************ 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:00.367 ************************************ 00:24:00.367 START TEST nvmf_shutdown_tc3 00:24:00.367 ************************************ 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.367 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:00.368 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
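The device matching traced above (nvmf/common.sh filling the e810/x722/mlx buckets and then testing 0x159b against the Mellanox IDs) can be sketched as a standalone helper. `classify_nic` is a hypothetical name, not part of the SPDK scripts; the device IDs are the ones visible in this log:

```shell
#!/usr/bin/env bash
# Sketch of the NIC classification done by nvmf/common.sh above: map a
# PCI device ID to the bucket (e810/x722/mlx) it would be appended to.
# classify_nic is an illustrative helper, not the real script.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;      # Intel E810 family
        0x37d2)        echo x722 ;;      # Intel X722
        0xa2dc|0x1021|0xa2d6|0x101d|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;       # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

classify_nic 0x159b   # the 0000:af:00.0 / 0000:af:00.1 ports found in this run
```

Both ports in this run report 0x8086:0x159b, so they land in the e810 bucket, and the `[[ 0x159b == \0\x\1\0\1\7 ]]` style Mellanox checks fail as expected.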
00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:00.368 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:00.368 Found net devices under 0000:af:00.0: cvl_0_0 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:00.368 Found net devices under 0000:af:00.1: cvl_0_1 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.368 12:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.368 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:24:00.627 00:24:00.627 --- 10.0.0.2 ping statistics --- 00:24:00.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.627 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:00.627 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:24:00.887 00:24:00.887 --- 10.0.0.1 ping statistics --- 00:24:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.887 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=15434 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 15434 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:00.887 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 15434 ']' 00:24:00.888 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.888 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.888 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
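The `waitforlisten 15434` call above blocks until the freshly started nvmf_tgt exposes its RPC socket. The retry loop amounts to something like the following sketch (`waitforsocket`, the socket path, and the retry budget are illustrative assumptions, not the real autotest helper):

```shell
#!/usr/bin/env bash
# Illustrative version of the waitforlisten pattern: poll until a
# UNIX-domain socket (e.g. /var/tmp/spdk.sock) appears, with a bounded
# number of retries so a hung target cannot stall the test forever.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        retries=$(( retries - 1 ))
        sleep 0.1
    done
    return 1                         # gave up; caller should fail the test
}
```

The real helper additionally checks that the PID is still alive (`kill -0`), so a target that crashed during startup fails fast instead of burning the whole retry budget.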
00:24:00.888 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.888 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.888 [2024-07-25 12:10:38.043029] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:00.888 [2024-07-25 12:10:38.043083] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.888 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.888 [2024-07-25 12:10:38.129934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.147 [2024-07-25 12:10:38.238305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.147 [2024-07-25 12:10:38.238360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.147 [2024-07-25 12:10:38.238380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.147 [2024-07-25 12:10:38.238395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.147 [2024-07-25 12:10:38.238410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
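The nvmfappstart trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waitforlisten spins until the app's RPC socket (/var/tmp/spdk.sock) appears, up to max_retries=100 attempts. A minimal sketch of that retry loop, under stated assumptions: the backgrounded `touch` here is a hypothetical stand-in for the real `ip netns exec ... nvmf_tgt` process creating its socket, and a plain file stands in for the Unix socket the real helper tests for.

```shell
#!/usr/bin/env bash
# Hedged sketch of common.sh's waitforlisten pattern: poll until the
# target's RPC socket path exists, giving up after max_retries tries.
rpc_addr=/tmp/spdk.sock.$$   # stand-in for /var/tmp/spdk.sock
max_retries=100

# Hypothetical stand-in for the nvmf_tgt app: it "listens" (creates the
# path) only after a short startup delay, as the real target would.
( sleep 0.3; touch "$rpc_addr" ) &

i=0
while [ ! -e "$rpc_addr" ]; do   # real helper checks the socket itself
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
        echo "app never listened on $rpc_addr" >&2
        exit 1
    fi
    sleep 0.1
done
echo "listening: $rpc_addr"
wait
rm -f "$rpc_addr"
```

Once the path shows up the harness proceeds (the `return 0` seen at autotest_common.sh@864 in the trace); on exhaustion it would fail the test instead.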
00:24:01.147 [2024-07-25 12:10:38.238947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.147 [2024-07-25 12:10:38.239049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.147 [2024-07-25 12:10:38.239160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:01.147 [2024-07-25 12:10:38.239162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.713 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.713 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:01.713 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.713 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.713 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:01.973 [2024-07-25 12:10:39.023333] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.973 12:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.973 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:01.973 Malloc1 00:24:01.973 [2024-07-25 12:10:39.131351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.973 Malloc2 00:24:01.973 Malloc3 00:24:01.973 Malloc4 00:24:02.233 Malloc5 00:24:02.233 Malloc6 00:24:02.233 Malloc7 00:24:02.233 Malloc8 00:24:02.233 Malloc9 
00:24:02.494 Malloc10 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=15745 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 15745 /var/tmp/bdevperf.sock 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 15745 ']' 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
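The create_subsystems phase above (the repeated `for i in "${num_subsystems[@]}"` / `cat` pairs at shutdown.sh@27-28, followed by the single `rpc_cmd` at @35) appends one block of RPC commands per subsystem into rpcs.txt and then replays the whole file in one batch, which is why ten Malloc bdevs appear back to back. A minimal sketch of that accumulate-then-replay pattern; the specific RPC lines and the line-count stand-in for `rpc_cmd < rpcs.txt` are illustrative, not the script's exact commands.

```shell
#!/usr/bin/env bash
# Sketch: build one batched RPC script for subsystems 1..10, replay once.
rpcs=$(mktemp)                 # stand-in for .../target/rpcs.txt
num_subsystems=({1..10})

for i in "${num_subsystems[@]}"; do
    # Each iteration appends this subsystem's commands, as the repeated
    # "cat" calls in the trace do.
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --allow-any-host
EOF
done

# Stand-in for the single batched "rpc_cmd" replay: count what queued up.
batched_lines=$(wc -l < "$rpcs")
rm -f "$rpcs"
echo "$batched_lines"
```

Batching all subsystem setup into one rpc_cmd invocation avoids paying the RPC client startup cost ten times over.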
00:24:02.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.494 { 00:24:02.494 "params": { 00:24:02.494 "name": "Nvme$subsystem", 00:24:02.494 "trtype": "$TEST_TRANSPORT", 00:24:02.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.494 "adrfam": "ipv4", 00:24:02.494 "trsvcid": "$NVMF_PORT", 00:24:02.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.494 "hdgst": ${hdgst:-false}, 00:24:02.494 "ddgst": ${ddgst:-false} 00:24:02.494 }, 00:24:02.494 "method": "bdev_nvme_attach_controller" 00:24:02.494 } 00:24:02.494 EOF 00:24:02.494 )") 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.494 { 00:24:02.494 "params": { 00:24:02.494 "name": "Nvme$subsystem", 00:24:02.494 "trtype": "$TEST_TRANSPORT", 00:24:02.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.494 
"adrfam": "ipv4", 00:24:02.494 "trsvcid": "$NVMF_PORT", 00:24:02.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.494 "hdgst": ${hdgst:-false}, 00:24:02.494 "ddgst": ${ddgst:-false} 00:24:02.494 }, 00:24:02.494 "method": "bdev_nvme_attach_controller" 00:24:02.494 } 00:24:02.494 EOF 00:24:02.494 )") 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.494 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.494 { 00:24:02.494 "params": { 00:24:02.494 "name": "Nvme$subsystem", 00:24:02.494 "trtype": "$TEST_TRANSPORT", 00:24:02.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.494 "adrfam": "ipv4", 00:24:02.494 "trsvcid": "$NVMF_PORT", 00:24:02.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": 
${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 
)") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 [2024-07-25 12:10:39.692099] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:02.495 [2024-07-25 12:10:39.692169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15745 ] 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.495 { 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme$subsystem", 00:24:02.495 "trtype": "$TEST_TRANSPORT", 00:24:02.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "$NVMF_PORT", 00:24:02.495 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.495 "hdgst": ${hdgst:-false}, 00:24:02.495 "ddgst": ${ddgst:-false} 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 } 00:24:02.495 EOF 00:24:02.495 )") 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:02.495 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme1", 00:24:02.495 "trtype": "tcp", 00:24:02.495 "traddr": "10.0.0.2", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "4420", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.495 "hdgst": false, 00:24:02.495 "ddgst": false 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 },{ 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme2", 00:24:02.495 "trtype": "tcp", 00:24:02.495 "traddr": "10.0.0.2", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "4420", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:02.495 "hdgst": false, 00:24:02.495 "ddgst": false 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 },{ 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme3", 00:24:02.495 "trtype": "tcp", 00:24:02.495 "traddr": "10.0.0.2", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "4420", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:02.495 "hdgst": false, 00:24:02.495 "ddgst": false 00:24:02.495 }, 00:24:02.495 
"method": "bdev_nvme_attach_controller" 00:24:02.495 },{ 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme4", 00:24:02.495 "trtype": "tcp", 00:24:02.495 "traddr": "10.0.0.2", 00:24:02.495 "adrfam": "ipv4", 00:24:02.495 "trsvcid": "4420", 00:24:02.495 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:02.495 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:02.495 "hdgst": false, 00:24:02.495 "ddgst": false 00:24:02.495 }, 00:24:02.495 "method": "bdev_nvme_attach_controller" 00:24:02.495 },{ 00:24:02.495 "params": { 00:24:02.495 "name": "Nvme5", 00:24:02.496 "trtype": "tcp", 00:24:02.496 "traddr": "10.0.0.2", 00:24:02.496 "adrfam": "ipv4", 00:24:02.496 "trsvcid": "4420", 00:24:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:02.496 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:02.496 "hdgst": false, 00:24:02.496 "ddgst": false 00:24:02.496 }, 00:24:02.496 "method": "bdev_nvme_attach_controller" 00:24:02.496 },{ 00:24:02.496 "params": { 00:24:02.496 "name": "Nvme6", 00:24:02.496 "trtype": "tcp", 00:24:02.496 "traddr": "10.0.0.2", 00:24:02.496 "adrfam": "ipv4", 00:24:02.496 "trsvcid": "4420", 00:24:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:02.496 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:02.496 "hdgst": false, 00:24:02.496 "ddgst": false 00:24:02.496 }, 00:24:02.496 "method": "bdev_nvme_attach_controller" 00:24:02.496 },{ 00:24:02.496 "params": { 00:24:02.496 "name": "Nvme7", 00:24:02.496 "trtype": "tcp", 00:24:02.496 "traddr": "10.0.0.2", 00:24:02.496 "adrfam": "ipv4", 00:24:02.496 "trsvcid": "4420", 00:24:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:02.496 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:02.496 "hdgst": false, 00:24:02.496 "ddgst": false 00:24:02.496 }, 00:24:02.496 "method": "bdev_nvme_attach_controller" 00:24:02.496 },{ 00:24:02.496 "params": { 00:24:02.496 "name": "Nvme8", 00:24:02.496 "trtype": "tcp", 00:24:02.496 "traddr": "10.0.0.2", 00:24:02.496 "adrfam": "ipv4", 00:24:02.496 "trsvcid": "4420", 00:24:02.496 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:24:02.496 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:02.496 "hdgst": false, 00:24:02.496 "ddgst": false 00:24:02.496 }, 00:24:02.496 "method": "bdev_nvme_attach_controller" 00:24:02.496 },{ 00:24:02.496 "params": { 00:24:02.496 "name": "Nvme9", 00:24:02.496 "trtype": "tcp", 00:24:02.496 "traddr": "10.0.0.2", 00:24:02.496 "adrfam": "ipv4", 00:24:02.496 "trsvcid": "4420", 00:24:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:02.496 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:02.496 "hdgst": false, 00:24:02.496 "ddgst": false 00:24:02.496 }, 00:24:02.496 "method": "bdev_nvme_attach_controller" 00:24:02.496 },{ 00:24:02.496 "params": { 00:24:02.496 "name": "Nvme10", 00:24:02.496 "trtype": "tcp", 00:24:02.496 "traddr": "10.0.0.2", 00:24:02.496 "adrfam": "ipv4", 00:24:02.496 "trsvcid": "4420", 00:24:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:02.496 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:02.496 "hdgst": false, 00:24:02.496 "ddgst": false 00:24:02.496 }, 00:24:02.496 "method": "bdev_nvme_attach_controller" 00:24:02.496 }' 00:24:02.496 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.496 [2024-07-25 12:10:39.775965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.755 [2024-07-25 12:10:39.863492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.657 Running I/O for 10 seconds... 
00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:04.657 12:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:04.657 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.915 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:04.915 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:04.915 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:05.173 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:05.440 12:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 15434
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 15434 ']'
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 15434
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 15434
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 15434'
00:24:05.440 killing process with pid 15434
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 15434
00:24:05.440 12:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 15434
00:24:05.440 [2024-07-25 12:10:42.623756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140bd30 is same with the state(5) to be set
[identical *ERROR* line for tqpair=0x140bd30 repeated through 12:10:42.625002]
00:24:05.441 [2024-07-25 12:10:42.639823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1391ae0 is same with the state(5) to be set
00:24:05.441 [2024-07-25 12:10:42.641450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c1f0 is same with the state(5) to be set
[identical *ERROR* line for tqpair=0x140c1f0 repeated through 12:10:42.642692]
00:24:05.442 [2024-07-25 12:10:42.645557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6b0 is same with the state(5) to be set
[identical *ERROR* line for tqpair=0x140c6b0 repeated through 12:10:42.646828]
00:24:05.443 [2024-07-25 12:10:42.648910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb90 is same with the state(5) to be set
[identical *ERROR* line for tqpair=0x140cb90 repeated through 12:10:42.649068]
00:24:05.443 [2024-07-25 12:10:42.652185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set
[identical *ERROR* line for tqpair=0x140d9d0 repeated through 12:10:42.652775]
00:24:05.443 [2024-07-25 12:10:42.652793]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.652997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653016] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.443 [2024-07-25 12:10:42.653225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653244] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.653396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d9d0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655318] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655452] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655593] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655741] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655878] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.444 [2024-07-25 12:10:42.655978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.445 [2024-07-25 12:10:42.655990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.445 [2024-07-25 12:10:42.656001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140deb0 is same with the state(5) to be set 00:24:05.445 [2024-07-25 12:10:42.656965] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1391620 is same with the state(5) to be set 00:24:05.445
[2024-07-25 12:10:42.669638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.445
[2024-07-25 12:10:42.669680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.445
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, cid:3 (12:10:42.669694 through 12:10:42.669746) ...]
[2024-07-25 12:10:42.669756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24524c0 is same with the state(5) to be set 00:24:05.445
[... same sequence of four aborted ASYNC EVENT REQUESTs (cid:0-3) followed by a recv-state error repeated for tqpair=0x2558390, 0x2550ae0, 0x1e8e610, 0x23b9b20, 0x245a8a0, 0x238d120, 0x2557840, 0x23afb50, and 0x2558dc0 (12:10:42.669804 through 12:10:42.670817) ...]
[2024-07-25 12:10:42.671738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446
[2024-07-25 12:10:42.671767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446
[... same aborted WRITE pair repeated for cid:24 (lba:19456), cid:25 (lba:19584), cid:26 (lba:19712), cid:27 (lba:19840), cid:28 (lba:19968) (12:10:42.671786 through 12:10:42.671888) ...]
[2024-07-25 12:10:42.671900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.671910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.671922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.671932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.671944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.671953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.671965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.671975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.671986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 
12:10:42.672158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.446 [2024-07-25 12:10:42.672256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.446 [2024-07-25 12:10:42.672267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 
[2024-07-25 12:10:42.672664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.672990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.447 [2024-07-25 12:10:42.673045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.447 [2024-07-25 12:10:42.673143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.447 [2024-07-25 12:10:42.673155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.673165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.673177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.673187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.673223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:05.448 [2024-07-25 12:10:42.673287] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23873f0 was disconnected and freed. reset controller. 00:24:05.448 [2024-07-25 12:10:42.674788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.674978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.674994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.448 [2024-07-25 12:10:42.675017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.675041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.675063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.675085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.675106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.448 [2024-07-25 12:10:42.675128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.448 [2024-07-25 12:10:42.675138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.448 [2024-07-25 12:10:42.675150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.448 [2024-07-25 12:10:42.675160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / "ABORTED - SQ DELETION" completion pairs repeated for cid:16 through cid:63 (lba:18432 through lba:24448, len:128 each), 2024-07-25 12:10:42.675172 to 12:10:42.676229 ...]
00:24:05.449 [2024-07-25 12:10:42.676241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246e430 is same with the state(5) to be set
00:24:05.449 [2024-07-25 12:10:42.677126] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x246e430 was disconnected and freed. reset controller.
00:24:05.449 [2024-07-25 12:10:42.677436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.449 [2024-07-25 12:10:42.677458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / "ABORTED - SQ DELETION" completion pairs repeated for cid:57 through cid:63 (lba:31872 through lba:32640), then READ command / "ABORTED - SQ DELETION" pairs for cid:0 through cid:55 (lba:24576 through lba:31616, len:128 each), 2024-07-25 12:10:42.677475 to 12:10:42.687528 ...]
00:24:05.451 [2024-07-25 12:10:42.687646] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23c99f0 was disconnected and freed. reset controller.
00:24:05.451 [2024-07-25 12:10:42.689210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.451 [2024-07-25 12:10:42.689230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.451 [2024-07-25 12:10:42.689247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.451 [2024-07-25 12:10:42.689258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.451 [2024-07-25 12:10:42.689271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.451 [2024-07-25 12:10:42.689282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.451 [2024-07-25 12:10:42.689296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56
nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:05.451 [2024-07-25 12:10:42.689444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689575] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.451 [2024-07-25 12:10:42.689692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.451 [2024-07-25 12:10:42.689702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:05.452 [2024-07-25 12:10:42.689859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.689983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.689994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 
12:10:42.690402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690537] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.452 [2024-07-25 12:10:42.690637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.452 [2024-07-25 12:10:42.690648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.690661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.690671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.690684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.690695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.690708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.690718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.690732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.690742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.690756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.690767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.690840] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23888a0 was disconnected and freed. reset controller. 
00:24:05.453 [2024-07-25 12:10:42.691000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 
[2024-07-25 12:10:42.691557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.453 [2024-07-25 12:10:42.691799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.453 [2024-07-25 12:10:42.691812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.691984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.691997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 
[2024-07-25 12:10:42.692114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.454 [2024-07-25 12:10:42.692558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.454 [2024-07-25 12:10:42.692641] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x246cf60 was disconnected and freed. reset controller. 00:24:05.454 [2024-07-25 12:10:42.694162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:05.454 [2024-07-25 12:10:42.694214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245a8a0 (9): Bad file descriptor 00:24:05.454 [2024-07-25 12:10:42.694240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24524c0 (9): Bad file descriptor 00:24:05.454 [2024-07-25 12:10:42.694271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558390 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.694301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2550ae0 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.694322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.694342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9b20 (9): Bad file descriptor 
00:24:05.455 [2024-07-25 12:10:42.694368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238d120 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.694392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2557840 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.694413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23afb50 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.694432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558dc0 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.699942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:05.455 [2024-07-25 12:10:42.699990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:05.455 [2024-07-25 12:10:42.701164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:05.455 [2024-07-25 12:10:42.701198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:05.455 [2024-07-25 12:10:42.701488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.455 [2024-07-25 12:10:42.701513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245a8a0 with addr=10.0.0.2, port=4420 00:24:05.455 [2024-07-25 12:10:42.701527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245a8a0 is same with the state(5) to be set 00:24:05.455 [2024-07-25 12:10:42.701749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.455 [2024-07-25 12:10:42.701767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23afb50 with addr=10.0.0.2, port=4420 00:24:05.455 [2024-07-25 12:10:42.701779] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23afb50 is same with the state(5) to be set 00:24:05.455 [2024-07-25 12:10:42.702079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.455 [2024-07-25 12:10:42.702096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8e610 with addr=10.0.0.2, port=4420 00:24:05.455 [2024-07-25 12:10:42.702108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e610 is same with the state(5) to be set 00:24:05.455 [2024-07-25 12:10:42.702168] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.455 [2024-07-25 12:10:42.702285] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.455 [2024-07-25 12:10:42.703117] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.455 [2024-07-25 12:10:42.703179] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:05.455 [2024-07-25 12:10:42.703768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.455 [2024-07-25 12:10:42.703801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2557840 with addr=10.0.0.2, port=4420 00:24:05.455 [2024-07-25 12:10:42.703817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2557840 is same with the state(5) to be set 00:24:05.455 [2024-07-25 12:10:42.703976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.455 [2024-07-25 12:10:42.703991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2550ae0 with addr=10.0.0.2, port=4420 00:24:05.455 [2024-07-25 12:10:42.704000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2550ae0 is same with the state(5) to be set 00:24:05.455 [2024-07-25 12:10:42.704015] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245a8a0 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.704029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23afb50 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.704041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.704160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2557840 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.704177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2550ae0 (9): Bad file descriptor 00:24:05.455 [2024-07-25 12:10:42.704188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:05.455 [2024-07-25 12:10:42.704198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:05.455 [2024-07-25 12:10:42.704209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:05.455 [2024-07-25 12:10:42.704226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:05.455 [2024-07-25 12:10:42.704235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:05.455 [2024-07-25 12:10:42.704244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:24:05.455 [2024-07-25 12:10:42.704257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:05.455 [2024-07-25 12:10:42.704266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:05.455 [2024-07-25 12:10:42.704275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:05.455 [2024-07-25 12:10:42.704335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 
12:10:42.704568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.455 [2024-07-25 12:10:42.704773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.455 [2024-07-25 12:10:42.704783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.456 [2024-07-25 12:10:42.704949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.704980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.704991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 
12:10:42.705437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.456 [2024-07-25 12:10:42.705638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.456 [2024-07-25 12:10:42.705650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.705659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.705671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.705681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.457 [2024-07-25 12:10:42.705693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.457 [2024-07-25 12:10:42.705702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.457 [2024-07-25 12:10:42.705715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.457 [2024-07-25 12:10:42.705724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.457 [2024-07-25 12:10:42.705736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.457 [2024-07-25 12:10:42.705746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.457 [2024-07-25 12:10:42.705756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2a60 is same with the state(5) to be set
00:24:05.457 [2024-07-25 12:10:42.705817] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24f2a60 was disconnected and freed. reset controller.
00:24:05.457 [2024-07-25 12:10:42.705875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:05.457 [2024-07-25 12:10:42.705885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:05.457 [2024-07-25 12:10:42.705893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:05.457 [2024-07-25 12:10:42.705905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:05.457 [2024-07-25 12:10:42.705914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:05.457 [2024-07-25 12:10:42.705924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:24:05.457 [2024-07-25 12:10:42.705941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:24:05.457 [2024-07-25 12:10:42.705949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:24:05.457 [2024-07-25 12:10:42.705958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:24:05.457 [2024-07-25 12:10:42.706011] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:05.457 [2024-07-25 12:10:42.707410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:05.457 [2024-07-25 12:10:42.707423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
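The completions in the log above all carry status "(00/08)": status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion" — expected when queue pairs are torn down during a controller reset. A minimal sketch (a hypothetical helper, not SPDK code) of decoding that (SCT/SC) pair into the label the log prints:

```python
# Hypothetical decoder for the "(SCT/SC)" pair that spdk_nvme_print_completion
# logs, e.g. "ABORTED - SQ DELETION (00/08)". Only a few generic status codes
# are listed here; the full table lives in the NVMe base specification.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    """Map a status code type (SCT) / status code (SC) pair to a short label."""
    if sct == 0x0:  # SCT 0x0: generic command status
        return GENERIC_STATUS.get(sc, f"GENERIC STATUS 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

# The completions above: every aborted READ reports (00/08).
print(decode_status(0x0, 0x08))
```

Seeing long runs of (00/08) for in-flight READs, as here, indicates the I/O submission queue was deleted underneath them rather than an error on the commands themselves.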
00:24:05.457 [2024-07-25 12:10:42.707448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:05.457 [2024-07-25 12:10:42.707512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 
12:10:42.707887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.707985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.707995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708007] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.708019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.708040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.708061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.708083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.708105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.457 [2024-07-25 12:10:42.708127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.457 [2024-07-25 12:10:42.708139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 
[2024-07-25 12:10:42.708256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708865] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.708908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.708918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6e70 is same with the state(5) to be set 00:24:05.458 [2024-07-25 12:10:42.710399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.458 [2024-07-25 12:10:42.710417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.458 [2024-07-25 12:10:42.710432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 
12:10:42.710972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.710983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.710993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711090] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-07-25 12:10:42.711199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.459 [2024-07-25 12:10:42.711209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 
[2024-07-25 12:10:42.711338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.711802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.711813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c85d0 is same with the state(5) to be set 00:24:05.460 [2024-07-25 12:10:42.713280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.460 [2024-07-25 12:10:42.713534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-07-25 12:10:42.713543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:05.460 [2024-07-25 12:10:42.713555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-07-25 12:10:42.713565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.461 [... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:13-63, lba:18048-24448, timestamps 12:10:42.713577-12:10:42.714688 ...] 00:24:05.462 [2024-07-25 12:10:42.714699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2cba230 is same with the state(5) to be set 00:24:05.462 [2024-07-25 12:10:42.716163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-07-25 12:10:42.716180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.462 [... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1-63, lba:16512-24448, timestamps 12:10:42.716193-12:10:42.717571 ...] 00:24:05.464 [2024-07-25 12:10:42.717582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e61cb0 is same with the state(5) to be set 00:24:05.464 [2024-07-25 12:10:42.719695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.464 [2024-07-25 12:10:42.719723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:05.464 [2024-07-25 12:10:42.719736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:05.723 task offset: 19328 on job bdev=Nvme5n1 fails 00:24:05.723 00:24:05.723 Latency(us) 00:24:05.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.723 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.723 Job: Nvme1n1 ended in about 1.09 seconds with error
00:24:05.723 Verification LBA range: start 0x0 length 0x400
00:24:05.723 Nvme1n1 : 1.09 117.85 7.37 58.93 0.00 356896.58 29789.09 356515.84
00:24:05.723 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.723 Job: Nvme2n1 ended in about 1.08 seconds with error
00:24:05.723 Verification LBA range: start 0x0 length 0x400
00:24:05.723 Nvme2n1 : 1.08 118.17 7.39 59.08 0.00 346517.88 31695.59 356515.84
00:24:05.723 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.723 Job: Nvme3n1 ended in about 1.09 seconds with error
00:24:05.723 Verification LBA range: start 0x0 length 0x400
00:24:05.723 Nvme3n1 : 1.09 117.54 7.35 58.77 0.00 338908.63 30980.65 305040.29
00:24:05.723 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.723 Job: Nvme4n1 ended in about 1.07 seconds with error
00:24:05.723 Verification LBA range: start 0x0 length 0x400
00:24:05.723 Nvme4n1 : 1.07 179.11 11.19 59.70 0.00 242679.85 23354.65 291694.78
00:24:05.723 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.724 Job: Nvme5n1 ended in about 1.06 seconds with error
00:24:05.724 Verification LBA range: start 0x0 length 0x400
00:24:05.724 Nvme5n1 : 1.06 120.20 7.51 60.10 0.00 311868.51 16324.42 314572.80
00:24:05.724 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.724 Job: Nvme6n1 ended in about 1.07 seconds with error
00:24:05.724 Verification LBA range: start 0x0 length 0x400
00:24:05.724 Nvme6n1 : 1.07 119.21 7.45 59.60 0.00 305335.39 23116.33 289788.28
00:24:05.724 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.724 Job: Nvme7n1 ended in about 1.09 seconds with error
00:24:05.724 Verification LBA range: start 0x0 length 0x400
00:24:05.724 Nvme7n1 : 1.09 117.23 7.33 58.62 0.00 301846.65 32648.84 265003.75
00:24:05.724 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.724 Job: Nvme8n1 ended in about 1.09 seconds with error
00:24:05.724 Verification LBA range: start 0x0 length 0x400
00:24:05.724 Nvme8n1 : 1.09 116.92 7.31 58.46 0.00 293218.99 33602.09 314572.80
00:24:05.724 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.724 Job: Nvme9n1 ended in about 1.08 seconds with error
00:24:05.724 Verification LBA range: start 0x0 length 0x400
00:24:05.724 Nvme9n1 : 1.08 119.01 7.44 59.51 0.00 277345.75 23831.27 318385.80
00:24:05.724 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:05.724 Job: Nvme10n1 ended in about 1.07 seconds with error
00:24:05.724 Verification LBA range: start 0x0 length 0x400
00:24:05.724 Nvme10n1 : 1.07 119.65 7.48 59.82 0.00 266119.91 23473.80 341263.83
00:24:05.724 ===================================================================================================================
00:24:05.724 Total : 1244.91 77.81 592.60 0.00 302093.36 16324.42 356515.84
00:24:05.724 [2024-07-25 12:10:42.751128] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:05.724 [2024-07-25 12:10:42.751169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:05.724 [2024-07-25 12:10:42.751572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:05.724 [2024-07-25 12:10:42.751595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2558dc0 with addr=10.0.0.2, port=4420
00:24:05.724 [2024-07-25 12:10:42.751615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2558dc0 is same with the state(5) to be set
00:24:05.724 [2024-07-25 12:10:42.751696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558dc0 (9): Bad file descriptor
00:24:05.724 [2024-07-25 12:10:42.752740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.752768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238d120 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.752780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238d120 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.753008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.753023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9b20 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.753033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9b20 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.753288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.753303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24524c0 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.753313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24524c0 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.753507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.753521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2558390 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.753530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2558390 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.753560] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:05.724 [2024-07-25 12:10:42.753577] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.724 [2024-07-25 12:10:42.753591] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.724 [2024-07-25 12:10:42.753609] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.724 [2024-07-25 12:10:42.753623] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.724 [2024-07-25 12:10:42.753636] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:05.724 [2024-07-25 12:10:42.754934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:05.724 [2024-07-25 12:10:42.754953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:05.724 [2024-07-25 12:10:42.754964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:05.724 [2024-07-25 12:10:42.754976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:05.724 [2024-07-25 12:10:42.754987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:05.724 [2024-07-25 12:10:42.755055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238d120 (9): Bad file descriptor 00:24:05.724 [2024-07-25 12:10:42.755070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9b20 (9): Bad file descriptor 00:24:05.724 [2024-07-25 12:10:42.755083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24524c0 (9): Bad file descriptor 00:24:05.724 [2024-07-25 12:10:42.755095] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558390 (9): Bad file descriptor 00:24:05.724 [2024-07-25 12:10:42.755105] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:05.724 [2024-07-25 12:10:42.755114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:05.724 [2024-07-25 12:10:42.755124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:05.724 [2024-07-25 12:10:42.755212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.724 [2024-07-25 12:10:42.755531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.755548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8e610 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.755562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e610 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.755757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.755772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23afb50 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.755781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23afb50 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.756004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.756019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245a8a0 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.756029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x245a8a0 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.756175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.756188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2550ae0 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.756197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2550ae0 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.756387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:05.724 [2024-07-25 12:10:42.756402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2557840 with addr=10.0.0.2, port=4420 00:24:05.724 [2024-07-25 12:10:42.756411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2557840 is same with the state(5) to be set 00:24:05.724 [2024-07-25 12:10:42.756421] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:05.724 [2024-07-25 12:10:42.756429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:05.724 [2024-07-25 12:10:42.756439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:05.724 [2024-07-25 12:10:42.756453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:05.724 [2024-07-25 12:10:42.756461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:05.724 [2024-07-25 12:10:42.756470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:05.724 [2024-07-25 12:10:42.756485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:05.724 [2024-07-25 12:10:42.756493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:05.724 [2024-07-25 12:10:42.756501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:05.724 [2024-07-25 12:10:42.756514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:05.724 [2024-07-25 12:10:42.756522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:05.724 [2024-07-25 12:10:42.756531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:05.724 [2024-07-25 12:10:42.756598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.724 [2024-07-25 12:10:42.756616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.724 [2024-07-25 12:10:42.756624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.724 [2024-07-25 12:10:42.756632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:05.724 [2024-07-25 12:10:42.756644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor 00:24:05.724 [2024-07-25 12:10:42.756660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23afb50 (9): Bad file descriptor 00:24:05.724 [2024-07-25 12:10:42.756671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245a8a0 (9): Bad file descriptor 00:24:05.725 [2024-07-25 12:10:42.756683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2550ae0 (9): Bad file descriptor 00:24:05.725 [2024-07-25 12:10:42.756695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2557840 (9): Bad file descriptor 00:24:05.725 [2024-07-25 12:10:42.756748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:05.725 [2024-07-25 12:10:42.756759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:05.725 [2024-07-25 12:10:42.756769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:05.725 [2024-07-25 12:10:42.756782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:05.725 [2024-07-25 12:10:42.756790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:05.725 [2024-07-25 12:10:42.756800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:24:05.725 [2024-07-25 12:10:42.756812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:05.725 [2024-07-25 12:10:42.756820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:05.725 [2024-07-25 12:10:42.756829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:05.725 [2024-07-25 12:10:42.756840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:05.725 [2024-07-25 12:10:42.756849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:05.725 [2024-07-25 12:10:42.756858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:05.725 [2024-07-25 12:10:42.756870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:05.725 [2024-07-25 12:10:42.756878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:05.725 [2024-07-25 12:10:42.756886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:05.725 [2024-07-25 12:10:42.757468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.725 [2024-07-25 12:10:42.757482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.725 [2024-07-25 12:10:42.757490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.725 [2024-07-25 12:10:42.757499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:05.725 [2024-07-25 12:10:42.757507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:05.984 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:05.984 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 15745 00:24:06.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (15745) - No such process 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- 
# set +e 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.921 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.921 rmmod nvme_tcp 00:24:06.921 rmmod nvme_fabrics 00:24:06.921 rmmod nvme_keyring 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.180 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.082 12:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:09.082 00:24:09.082 real 0m8.667s 00:24:09.082 user 0m22.809s 00:24:09.082 sys 0m1.483s 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.082 ************************************ 00:24:09.082 END TEST nvmf_shutdown_tc3 00:24:09.082 ************************************ 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:09.082 00:24:09.082 real 0m34.302s 00:24:09.082 user 1m29.494s 00:24:09.082 sys 0m9.301s 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:09.082 ************************************ 00:24:09.082 END TEST nvmf_shutdown 00:24:09.082 ************************************ 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:24:09.082 00:24:09.082 real 12m11.672s 00:24:09.082 user 27m30.825s 00:24:09.082 sys 3m13.690s 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.082 12:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.082 ************************************ 00:24:09.082 END TEST nvmf_target_extra 00:24:09.082 ************************************ 00:24:09.340 12:10:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:09.340 12:10:46 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:09.340 12:10:46 
nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.340 12:10:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:09.340 ************************************ 00:24:09.340 START TEST nvmf_host 00:24:09.340 ************************************ 00:24:09.340 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:09.340 * Looking for test storage... 00:24:09.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:09.341 12:10:46 
nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.341 ************************************ 00:24:09.341 START TEST nvmf_multicontroller 00:24:09.341 ************************************ 00:24:09.341 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:09.600 * Looking for test storage... 
00:24:09.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.600 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.601 12:10:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:14.872 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:14.872 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:14.872 Found net devices under 0000:af:00.0: cvl_0_0 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.872 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:14.873 Found net devices under 0000:af:00.1: cvl_0_1 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.873 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:24:15.131 00:24:15.131 --- 10.0.0.2 ping statistics --- 00:24:15.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.131 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:24:15.131 00:24:15.131 --- 10.0.0.1 ping statistics --- 00:24:15.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.131 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.131 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=20142 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 20142 00:24:15.389 12:10:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 20142 ']' 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.389 12:10:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.389 [2024-07-25 12:10:52.489065] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:15.389 [2024-07-25 12:10:52.489125] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.389 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.389 [2024-07-25 12:10:52.575655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:15.389 [2024-07-25 12:10:52.682805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.389 [2024-07-25 12:10:52.682852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:15.389 [2024-07-25 12:10:52.682865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.389 [2024-07-25 12:10:52.682876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.389 [2024-07-25 12:10:52.682891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.389 [2024-07-25 12:10:52.683425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.389 [2024-07-25 12:10:52.683518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.389 [2024-07-25 12:10:52.683520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 [2024-07-25 12:10:53.481082] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 Malloc0 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 [2024-07-25 
12:10:53.552490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 [2024-07-25 12:10:53.560430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 Malloc1 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=20377 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 20377 /var/tmp/bdevperf.sock 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 20377 ']' 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.324 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.582 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.842 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:16.842 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:16.842 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.842 12:10:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 NVMe0n1 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.842 1 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 request: 00:24:16.842 { 00:24:16.842 "name": "NVMe0", 00:24:16.842 "trtype": "tcp", 00:24:16.842 "traddr": "10.0.0.2", 00:24:16.842 "adrfam": "ipv4", 00:24:16.842 "trsvcid": "4420", 00:24:16.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.842 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:16.842 "hostaddr": "10.0.0.2", 00:24:16.842 "hostsvcid": "60000", 00:24:16.842 "prchk_reftag": false, 00:24:16.842 "prchk_guard": false, 00:24:16.842 "hdgst": false, 00:24:16.842 "ddgst": false, 00:24:16.842 "method": "bdev_nvme_attach_controller", 00:24:16.842 "req_id": 1 00:24:16.842 } 00:24:16.842 Got JSON-RPC error response 00:24:16.842 response: 00:24:16.842 { 00:24:16.842 "code": -114, 00:24:16.842 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:16.842 } 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:16.842 12:10:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.842 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.842 request: 00:24:16.842 { 00:24:16.842 "name": "NVMe0", 00:24:16.842 "trtype": "tcp", 00:24:16.842 "traddr": "10.0.0.2", 00:24:16.842 "adrfam": "ipv4", 00:24:16.842 "trsvcid": "4420", 00:24:16.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:16.842 "hostaddr": "10.0.0.2", 00:24:16.842 "hostsvcid": "60000", 00:24:16.842 "prchk_reftag": false, 00:24:16.842 "prchk_guard": false, 00:24:16.842 "hdgst": false, 00:24:16.842 "ddgst": false, 00:24:16.842 "method": "bdev_nvme_attach_controller", 00:24:16.842 "req_id": 1 00:24:16.842 } 00:24:16.843 Got JSON-RPC error response 00:24:16.843 response: 00:24:16.843 { 00:24:16.843 "code": -114, 00:24:16.843 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:16.843 } 00:24:16.843 
12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:16.843 12:10:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.843 request: 00:24:16.843 { 00:24:16.843 "name": "NVMe0", 00:24:16.843 "trtype": "tcp", 00:24:16.843 "traddr": "10.0.0.2", 00:24:16.843 "adrfam": "ipv4", 00:24:16.843 "trsvcid": "4420", 00:24:16.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.843 "hostaddr": "10.0.0.2", 00:24:16.843 "hostsvcid": "60000", 00:24:16.843 "prchk_reftag": false, 00:24:16.843 "prchk_guard": false, 00:24:16.843 "hdgst": false, 00:24:16.843 "ddgst": false, 00:24:16.843 "multipath": "disable", 00:24:16.843 "method": "bdev_nvme_attach_controller", 00:24:16.843 "req_id": 1 00:24:16.843 } 00:24:16.843 Got JSON-RPC error response 00:24:16.843 response: 00:24:16.843 { 00:24:16.843 "code": -114, 00:24:16.843 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:16.843 } 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.843 request: 00:24:16.843 { 00:24:16.843 "name": "NVMe0", 00:24:16.843 "trtype": "tcp", 00:24:16.843 "traddr": "10.0.0.2", 00:24:16.843 "adrfam": "ipv4", 00:24:16.843 "trsvcid": "4420", 00:24:16.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.843 "hostaddr": "10.0.0.2", 00:24:16.843 "hostsvcid": "60000", 00:24:16.843 "prchk_reftag": false, 00:24:16.843 "prchk_guard": false, 00:24:16.843 "hdgst": false, 00:24:16.843 "ddgst": false, 00:24:16.843 "multipath": "failover", 00:24:16.843 "method": "bdev_nvme_attach_controller", 00:24:16.843 "req_id": 1 00:24:16.843 } 00:24:16.843 Got JSON-RPC error response 00:24:16.843 response: 00:24:16.843 { 00:24:16.843 "code": -114, 00:24:16.843 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:16.843 
} 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.843 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.102 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:17.102 12:10:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.102 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.360 00:24:17.360 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:17.361 12:10:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:18.736 0 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 20377 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 
20377 ']' 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 20377 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 20377 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 20377' 00:24:18.736 killing process with pid 20377 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 20377 00:24:18.736 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 20377 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:18.737 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:18.737 [2024-07-25 12:10:53.672014] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:24:18.737 [2024-07-25 12:10:53.672080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20377 ] 00:24:18.737 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.737 [2024-07-25 12:10:53.754551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.737 [2024-07-25 12:10:53.846889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.737 [2024-07-25 12:10:54.510145] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 5cb04f1f-ca45-4223-ba38-57ef6b9cff2b already exists 00:24:18.737 [2024-07-25 12:10:54.510180] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:5cb04f1f-ca45-4223-ba38-57ef6b9cff2b alias for bdev NVMe1n1 00:24:18.737 [2024-07-25 12:10:54.510191] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:18.737 Running I/O for 1 seconds... 
00:24:18.737 00:24:18.737 Latency(us) 00:24:18.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.737 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:18.737 NVMe0n1 : 1.02 7879.13 30.78 0.00 0.00 16195.88 4170.47 29074.15 00:24:18.737 =================================================================================================================== 00:24:18.737 Total : 7879.13 30.78 0.00 0.00 16195.88 4170.47 29074.15 00:24:18.737 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.737 00:24:18.737 Latency(us) 00:24:18.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.737 =================================================================================================================== 00:24:18.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.737 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:18.737 12:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:18.737 
rmmod nvme_tcp 00:24:18.737 rmmod nvme_fabrics 00:24:18.737 rmmod nvme_keyring 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 20142 ']' 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 20142 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 20142 ']' 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 20142 00:24:18.995 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 20142 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 20142' 00:24:18.996 killing process with pid 20142 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 20142 00:24:18.996 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 20142 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.254 12:10:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.254 12:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.787 00:24:21.787 real 0m11.873s 00:24:21.787 user 0m15.251s 00:24:21.787 sys 0m5.130s 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:21.787 ************************************ 00:24:21.787 END TEST nvmf_multicontroller 00:24:21.787 ************************************ 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:21.787 12:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.788 ************************************ 00:24:21.788 START TEST nvmf_aer 00:24:21.788 ************************************ 00:24:21.788 12:10:58 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:21.788 * Looking for test storage... 00:24:21.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.788 12:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
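The arrays built above whitelist known vendor:device IDs (intel=0x8086, mellanox=0x15b3) into NIC families before the bus scan. A minimal sketch of that classification, reconstructed only from the IDs visible in this trace; the function name `classify_nic` is hypothetical (the real logic lives in nvmf/common.sh's `pci_bus_cache` lookups):

```shell
# Hypothetical classifier reconstructed from the device IDs in the trace:
# e810: 0x1592/0x159b, x722: 0x37d2, plus the mellanox IDs listed above.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0x1013|0x15b3:0x1015|0x15b3:0x1017|0x15b3:0x1019|\
        0x15b3:0x101d|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0xa2dc) echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the two ports found in this run (ice driver)
```

Both ports discovered in this run (0000:af:00.0 and 0000:af:00.1, 0x8086:0x159b) fall into the e810 bucket, which is why `pci_devs` is seeded from `e810[@]`.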
00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:27.058 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:27.058 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:27.058 Found net devices under 0000:af:00.0: cvl_0_0 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:27.058 Found net devices under 0000:af:00.1: cvl_0_1 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.058 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:27.318 00:24:27.318 --- 10.0.0.2 ping statistics --- 00:24:27.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.318 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:24:27.318 00:24:27.318 --- 10.0.0.1 ping statistics --- 00:24:27.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.318 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=24442 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 24442 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 24442 ']' 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
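The `nvmf_tcp_init` commands traced above move one physical port (cvl_0_0) into a private network namespace and leave its sibling (cvl_0_1) in the root namespace, giving a target/initiator pair on 10.0.0.0/24 over real hardware. A dry-run sketch of that wiring; the `run` wrapper is an illustrative stand-in (the real script invokes the commands directly and needs root plus the two ice ports):

```shell
# Dry-run reconstruction of the namespace wiring from the trace above.
# `run` just prints each command; swap its body for `sudo "$@"` to execute
# for real (requires root and the cvl_0_0/cvl_0_1 interfaces to exist).
NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
run() { echo "+ $*"; }

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target side
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # sanity check
```

The two pings in the log (root namespace to 10.0.0.2, namespace back to 10.0.0.1) confirm the pair is routable before the target is started under `ip netns exec`.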
00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.318 12:11:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.576 [2024-07-25 12:11:04.648591] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:27.576 [2024-07-25 12:11:04.648660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.576 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.576 [2024-07-25 12:11:04.739367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.576 [2024-07-25 12:11:04.833345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.576 [2024-07-25 12:11:04.833389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.576 [2024-07-25 12:11:04.833400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.576 [2024-07-25 12:11:04.833408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:27.576 [2024-07-25 12:11:04.833415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.576 [2024-07-25 12:11:04.833462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.576 [2024-07-25 12:11:04.834011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.576 [2024-07-25 12:11:04.834047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.576 [2024-07-25 12:11:04.834047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 [2024-07-25 12:11:05.564991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.513 12:11:05 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 Malloc0 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 [2024-07-25 12:11:05.624737] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 [ 
00:24:28.513 { 00:24:28.513 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:28.513 "subtype": "Discovery", 00:24:28.513 "listen_addresses": [], 00:24:28.513 "allow_any_host": true, 00:24:28.513 "hosts": [] 00:24:28.513 }, 00:24:28.513 { 00:24:28.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.513 "subtype": "NVMe", 00:24:28.513 "listen_addresses": [ 00:24:28.513 { 00:24:28.513 "trtype": "TCP", 00:24:28.513 "adrfam": "IPv4", 00:24:28.513 "traddr": "10.0.0.2", 00:24:28.513 "trsvcid": "4420" 00:24:28.513 } 00:24:28.513 ], 00:24:28.513 "allow_any_host": true, 00:24:28.513 "hosts": [], 00:24:28.513 "serial_number": "SPDK00000000000001", 00:24:28.513 "model_number": "SPDK bdev Controller", 00:24:28.513 "max_namespaces": 2, 00:24:28.513 "min_cntlid": 1, 00:24:28.513 "max_cntlid": 65519, 00:24:28.513 "namespaces": [ 00:24:28.513 { 00:24:28.513 "nsid": 1, 00:24:28.513 "bdev_name": "Malloc0", 00:24:28.513 "name": "Malloc0", 00:24:28.513 "nguid": "14D4E12DF64D48AEB75E7243F8958D09", 00:24:28.513 "uuid": "14d4e12d-f64d-48ae-b75e-7243f8958d09" 00:24:28.513 } 00:24:28.513 ] 00:24:28.513 } 00:24:28.513 ] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=24653 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:28.513 12:11:05 
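Before the aer helper is launched, host/aer.sh drives the target over JSON-RPC: create the TCP transport, back a namespace with a malloc bdev, publish it under a subsystem capped at two namespaces, and listen on 10.0.0.2:4420. A sketch of that sequence using SPDK's `rpc.py` (path and socket are assumptions; flags mirror the trace), guarded so it is a no-op when no target is running:

```shell
# Sketch of the RPC sequence host/aer.sh issues against the running target.
# RPC normally points at $SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock.
RPC=${RPC:-rpc.py}
NQN=nqn.2016-06.io.spdk:cnode1

if command -v "$RPC" >/dev/null 2>&1; then
    "$RPC" nvmf_create_transport -t tcp -o -u 8192          # host/aer.sh@14
    "$RPC" bdev_malloc_create 64 512 --name Malloc0         # host/aer.sh@16
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_get_subsystems                              # dumps the JSON
else
    echo "rpc.py not on PATH; sequence shown for reference only"
fi
```

The `-m 2` cap (`max_namespaces: 2`) is what lets the later `nvmf_subsystem_add_ns ... Malloc1` trigger exactly one namespace-attribute-changed AEN for the test to observe.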
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:28.513 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:28.513 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.772 Malloc1 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.772 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.772 [ 00:24:28.772 { 00:24:28.772 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:28.772 "subtype": "Discovery", 00:24:28.772 "listen_addresses": [], 00:24:28.772 "allow_any_host": true, 00:24:28.772 "hosts": [] 00:24:28.772 }, 00:24:28.772 { 00:24:28.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.772 "subtype": "NVMe", 00:24:28.772 "listen_addresses": [ 00:24:28.772 { 00:24:28.772 "trtype": "TCP", 00:24:28.772 "adrfam": "IPv4", 00:24:28.772 "traddr": "10.0.0.2", 00:24:28.772 "trsvcid": "4420" 00:24:28.772 } 00:24:28.772 ], 00:24:28.772 "allow_any_host": true, 00:24:28.772 "hosts": [], 00:24:28.772 "serial_number": "SPDK00000000000001", 00:24:28.772 "model_number": 
"SPDK bdev Controller", 00:24:28.772 "max_namespaces": 2, 00:24:28.772 "min_cntlid": 1, 00:24:28.772 "max_cntlid": 65519, 00:24:28.772 "namespaces": [ 00:24:28.772 { 00:24:28.772 "nsid": 1, 00:24:28.772 "bdev_name": "Malloc0", 00:24:28.772 "name": "Malloc0", 00:24:28.772 Asynchronous Event Request test 00:24:28.772 Attaching to 10.0.0.2 00:24:28.772 Attached to 10.0.0.2 00:24:28.772 Registering asynchronous event callbacks... 00:24:28.772 Starting namespace attribute notice tests for all controllers... 00:24:28.772 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:28.773 aer_cb - Changed Namespace 00:24:28.773 Cleaning up... 00:24:28.773 "nguid": "14D4E12DF64D48AEB75E7243F8958D09", 00:24:28.773 "uuid": "14d4e12d-f64d-48ae-b75e-7243f8958d09" 00:24:28.773 }, 00:24:28.773 { 00:24:28.773 "nsid": 2, 00:24:28.773 "bdev_name": "Malloc1", 00:24:28.773 "name": "Malloc1", 00:24:28.773 "nguid": "E4B043EC7FE8401A83511472B203C5BF", 00:24:28.773 "uuid": "e4b043ec-7fe8-401a-8351-1472b203c5bf" 00:24:28.773 } 00:24:28.773 ] 00:24:28.773 } 00:24:28.773 ] 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 24653 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.773 
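The `waitforfile` polling interleaved through the trace (common/autotest_common.sh@1265-1276) is how the script synchronizes with the aer helper: it spins until the helper touches `/tmp/aer_touch_file`, then issues the `Malloc1` add that fires the AEN. A self-contained reconstruction of that loop, matching the 200 x 0.1 s (~20 s) budget visible in the `i` counters:

```shell
# Reconstruction of the waitforfile helper traced above: poll every 0.1 s,
# give up after 200 attempts (~20 s), return 0 as soon as the file exists.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        if [ "$i" -ge 200 ]; then
            echo "timed out waiting for $file" >&2
            return 1
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 0
}

# Usage mirroring the test: a background job touches the file shortly after
# we start waiting, just as the aer binary touches /tmp/aer_touch_file.
tmpfile=$(mktemp -u)
( sleep 0.3; touch "$tmpfile" ) &
waitforfile "$tmpfile" && echo "file appeared"
rm -f "$tmpfile"
```

In this run the file appeared on the second poll (`i=2`), which is why only two `sleep 0.1` iterations show up in the trace.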
12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.773 12:11:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.773 rmmod nvme_tcp 00:24:28.773 rmmod nvme_fabrics 00:24:28.773 rmmod nvme_keyring 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 24442 ']' 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 24442 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 24442 ']' 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # 
kill -0 24442 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.773 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 24442 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 24442' 00:24:29.031 killing process with pid 24442 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 24442 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 24442 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.031 12:11:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.567 00:24:31.567 real 0m9.835s 00:24:31.567 user 0m7.595s 00:24:31.567 sys 0m4.937s 00:24:31.567 12:11:08 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.567 ************************************ 00:24:31.567 END TEST nvmf_aer 00:24:31.567 ************************************ 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.567 ************************************ 00:24:31.567 START TEST nvmf_async_init 00:24:31.567 ************************************ 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:31.567 * Looking for test storage... 
00:24:31.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:31.567 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.568 12:11:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:31.568 12:11:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=11334070e7c04acfb8671880d3bce7d6 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.568 12:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.852 
12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.852 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:36.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.853 12:11:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:36.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:36.853 Found net devices under 0000:af:00.0: cvl_0_0 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:36.853 Found net devices under 0000:af:00.1: cvl_0_1 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.853 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:24:37.112 00:24:37.112 --- 10.0.0.2 ping statistics --- 00:24:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.112 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:24:37.112 00:24:37.112 --- 10.0.0.1 ping statistics --- 00:24:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.112 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.112 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.112 12:11:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=28370 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 28370 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 28370 ']' 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.371 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.371 [2024-07-25 12:11:14.501634] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:24:37.371 [2024-07-25 12:11:14.501689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.371 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.371 [2024-07-25 12:11:14.586361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.630 [2024-07-25 12:11:14.678206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.630 [2024-07-25 12:11:14.678249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.630 [2024-07-25 12:11:14.678263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.630 [2024-07-25 12:11:14.678272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.630 [2024-07-25 12:11:14.678279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:37.630 [2024-07-25 12:11:14.678300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 [2024-07-25 12:11:14.822438] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 null0 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 11334070e7c04acfb8671880d3bce7d6 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 [2024-07-25 12:11:14.866692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.630 12:11:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.889 nvme0n1 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.889 [ 00:24:37.889 { 00:24:37.889 "name": "nvme0n1", 00:24:37.889 "aliases": [ 00:24:37.889 "11334070-e7c0-4acf-b867-1880d3bce7d6" 00:24:37.889 ], 00:24:37.889 "product_name": "NVMe disk", 00:24:37.889 "block_size": 512, 00:24:37.889 "num_blocks": 2097152, 00:24:37.889 "uuid": "11334070-e7c0-4acf-b867-1880d3bce7d6", 00:24:37.889 "assigned_rate_limits": { 00:24:37.889 "rw_ios_per_sec": 0, 00:24:37.889 "rw_mbytes_per_sec": 0, 00:24:37.889 "r_mbytes_per_sec": 0, 00:24:37.889 "w_mbytes_per_sec": 0 00:24:37.889 }, 00:24:37.889 "claimed": false, 00:24:37.889 "zoned": false, 00:24:37.889 "supported_io_types": { 00:24:37.889 "read": true, 00:24:37.889 "write": true, 00:24:37.889 "unmap": false, 00:24:37.889 "flush": true, 00:24:37.889 "reset": true, 00:24:37.889 "nvme_admin": true, 00:24:37.889 "nvme_io": true, 00:24:37.889 "nvme_io_md": false, 00:24:37.889 "write_zeroes": true, 00:24:37.889 "zcopy": false, 00:24:37.889 "get_zone_info": false, 00:24:37.889 "zone_management": false, 00:24:37.889 "zone_append": false, 00:24:37.889 "compare": true, 00:24:37.889 "compare_and_write": true, 00:24:37.889 "abort": true, 00:24:37.889 "seek_hole": false, 00:24:37.889 "seek_data": false, 00:24:37.889 "copy": true, 00:24:37.889 "nvme_iov_md": false 
00:24:37.889 }, 00:24:37.889 "memory_domains": [ 00:24:37.889 { 00:24:37.889 "dma_device_id": "system", 00:24:37.889 "dma_device_type": 1 00:24:37.889 } 00:24:37.889 ], 00:24:37.889 "driver_specific": { 00:24:37.889 "nvme": [ 00:24:37.889 { 00:24:37.889 "trid": { 00:24:37.889 "trtype": "TCP", 00:24:37.889 "adrfam": "IPv4", 00:24:37.889 "traddr": "10.0.0.2", 00:24:37.889 "trsvcid": "4420", 00:24:37.889 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:37.889 }, 00:24:37.889 "ctrlr_data": { 00:24:37.889 "cntlid": 1, 00:24:37.889 "vendor_id": "0x8086", 00:24:37.889 "model_number": "SPDK bdev Controller", 00:24:37.889 "serial_number": "00000000000000000000", 00:24:37.889 "firmware_revision": "24.09", 00:24:37.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:37.889 "oacs": { 00:24:37.889 "security": 0, 00:24:37.889 "format": 0, 00:24:37.889 "firmware": 0, 00:24:37.889 "ns_manage": 0 00:24:37.889 }, 00:24:37.889 "multi_ctrlr": true, 00:24:37.889 "ana_reporting": false 00:24:37.889 }, 00:24:37.889 "vs": { 00:24:37.889 "nvme_version": "1.3" 00:24:37.889 }, 00:24:37.889 "ns_data": { 00:24:37.889 "id": 1, 00:24:37.889 "can_share": true 00:24:37.889 } 00:24:37.889 } 00:24:37.889 ], 00:24:37.889 "mp_policy": "active_passive" 00:24:37.889 } 00:24:37.889 } 00:24:37.889 ] 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.889 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.889 [2024-07-25 12:11:15.128856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:37.889 [2024-07-25 12:11:15.128929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539b00 
(9): Bad file descriptor 00:24:38.148 [2024-07-25 12:11:15.260724] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.148 [ 00:24:38.148 { 00:24:38.148 "name": "nvme0n1", 00:24:38.148 "aliases": [ 00:24:38.148 "11334070-e7c0-4acf-b867-1880d3bce7d6" 00:24:38.148 ], 00:24:38.148 "product_name": "NVMe disk", 00:24:38.148 "block_size": 512, 00:24:38.148 "num_blocks": 2097152, 00:24:38.148 "uuid": "11334070-e7c0-4acf-b867-1880d3bce7d6", 00:24:38.148 "assigned_rate_limits": { 00:24:38.148 "rw_ios_per_sec": 0, 00:24:38.148 "rw_mbytes_per_sec": 0, 00:24:38.148 "r_mbytes_per_sec": 0, 00:24:38.148 "w_mbytes_per_sec": 0 00:24:38.148 }, 00:24:38.148 "claimed": false, 00:24:38.148 "zoned": false, 00:24:38.148 "supported_io_types": { 00:24:38.148 "read": true, 00:24:38.148 "write": true, 00:24:38.148 "unmap": false, 00:24:38.148 "flush": true, 00:24:38.148 "reset": true, 00:24:38.148 "nvme_admin": true, 00:24:38.148 "nvme_io": true, 00:24:38.148 "nvme_io_md": false, 00:24:38.148 "write_zeroes": true, 00:24:38.148 "zcopy": false, 00:24:38.148 "get_zone_info": false, 00:24:38.148 "zone_management": false, 00:24:38.148 "zone_append": false, 00:24:38.148 "compare": true, 00:24:38.148 "compare_and_write": true, 00:24:38.148 "abort": true, 00:24:38.148 "seek_hole": false, 00:24:38.148 "seek_data": false, 00:24:38.148 "copy": true, 00:24:38.148 "nvme_iov_md": false 00:24:38.148 }, 00:24:38.148 "memory_domains": [ 00:24:38.148 { 00:24:38.148 "dma_device_id": "system", 00:24:38.148 "dma_device_type": 1 
00:24:38.148 } 00:24:38.148 ], 00:24:38.148 "driver_specific": { 00:24:38.148 "nvme": [ 00:24:38.148 { 00:24:38.148 "trid": { 00:24:38.148 "trtype": "TCP", 00:24:38.148 "adrfam": "IPv4", 00:24:38.148 "traddr": "10.0.0.2", 00:24:38.148 "trsvcid": "4420", 00:24:38.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:38.148 }, 00:24:38.148 "ctrlr_data": { 00:24:38.148 "cntlid": 2, 00:24:38.148 "vendor_id": "0x8086", 00:24:38.148 "model_number": "SPDK bdev Controller", 00:24:38.148 "serial_number": "00000000000000000000", 00:24:38.148 "firmware_revision": "24.09", 00:24:38.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:38.148 "oacs": { 00:24:38.148 "security": 0, 00:24:38.148 "format": 0, 00:24:38.148 "firmware": 0, 00:24:38.148 "ns_manage": 0 00:24:38.148 }, 00:24:38.148 "multi_ctrlr": true, 00:24:38.148 "ana_reporting": false 00:24:38.148 }, 00:24:38.148 "vs": { 00:24:38.148 "nvme_version": "1.3" 00:24:38.148 }, 00:24:38.148 "ns_data": { 00:24:38.148 "id": 1, 00:24:38.148 "can_share": true 00:24:38.148 } 00:24:38.148 } 00:24:38.148 ], 00:24:38.148 "mp_policy": "active_passive" 00:24:38.148 } 00:24:38.148 } 00:24:38.148 ] 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zOjl1qpqQJ 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zOjl1qpqQJ 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.148 [2024-07-25 12:11:15.321508] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.148 [2024-07-25 12:11:15.321647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zOjl1qpqQJ 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.148 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.149 [2024-07-25 12:11:15.329522] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zOjl1qpqQJ 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.149 [2024-07-25 12:11:15.337570] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.149 [2024-07-25 12:11:15.337623] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:38.149 nvme0n1 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.149 [ 00:24:38.149 { 00:24:38.149 "name": "nvme0n1", 00:24:38.149 "aliases": [ 00:24:38.149 "11334070-e7c0-4acf-b867-1880d3bce7d6" 00:24:38.149 ], 00:24:38.149 "product_name": "NVMe disk", 00:24:38.149 "block_size": 512, 00:24:38.149 "num_blocks": 2097152, 00:24:38.149 "uuid": "11334070-e7c0-4acf-b867-1880d3bce7d6", 00:24:38.149 "assigned_rate_limits": { 00:24:38.149 "rw_ios_per_sec": 0, 00:24:38.149 "rw_mbytes_per_sec": 0, 00:24:38.149 "r_mbytes_per_sec": 0, 00:24:38.149 "w_mbytes_per_sec": 0 00:24:38.149 }, 00:24:38.149 "claimed": false, 00:24:38.149 "zoned": false, 00:24:38.149 "supported_io_types": { 
00:24:38.149 "read": true, 00:24:38.149 "write": true, 00:24:38.149 "unmap": false, 00:24:38.149 "flush": true, 00:24:38.149 "reset": true, 00:24:38.149 "nvme_admin": true, 00:24:38.149 "nvme_io": true, 00:24:38.149 "nvme_io_md": false, 00:24:38.149 "write_zeroes": true, 00:24:38.149 "zcopy": false, 00:24:38.149 "get_zone_info": false, 00:24:38.149 "zone_management": false, 00:24:38.149 "zone_append": false, 00:24:38.149 "compare": true, 00:24:38.149 "compare_and_write": true, 00:24:38.149 "abort": true, 00:24:38.149 "seek_hole": false, 00:24:38.149 "seek_data": false, 00:24:38.149 "copy": true, 00:24:38.149 "nvme_iov_md": false 00:24:38.149 }, 00:24:38.149 "memory_domains": [ 00:24:38.149 { 00:24:38.149 "dma_device_id": "system", 00:24:38.149 "dma_device_type": 1 00:24:38.149 } 00:24:38.149 ], 00:24:38.149 "driver_specific": { 00:24:38.149 "nvme": [ 00:24:38.149 { 00:24:38.149 "trid": { 00:24:38.149 "trtype": "TCP", 00:24:38.149 "adrfam": "IPv4", 00:24:38.149 "traddr": "10.0.0.2", 00:24:38.149 "trsvcid": "4421", 00:24:38.149 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:38.149 }, 00:24:38.149 "ctrlr_data": { 00:24:38.149 "cntlid": 3, 00:24:38.149 "vendor_id": "0x8086", 00:24:38.149 "model_number": "SPDK bdev Controller", 00:24:38.149 "serial_number": "00000000000000000000", 00:24:38.149 "firmware_revision": "24.09", 00:24:38.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:38.149 "oacs": { 00:24:38.149 "security": 0, 00:24:38.149 "format": 0, 00:24:38.149 "firmware": 0, 00:24:38.149 "ns_manage": 0 00:24:38.149 }, 00:24:38.149 "multi_ctrlr": true, 00:24:38.149 "ana_reporting": false 00:24:38.149 }, 00:24:38.149 "vs": { 00:24:38.149 "nvme_version": "1.3" 00:24:38.149 }, 00:24:38.149 "ns_data": { 00:24:38.149 "id": 1, 00:24:38.149 "can_share": true 00:24:38.149 } 00:24:38.149 } 00:24:38.149 ], 00:24:38.149 "mp_policy": "active_passive" 00:24:38.149 } 00:24:38.149 } 00:24:38.149 ] 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.zOjl1qpqQJ 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:38.149 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.407 rmmod nvme_tcp 00:24:38.407 rmmod nvme_fabrics 00:24:38.407 rmmod nvme_keyring 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:38.407 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 28370 ']' 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
28370 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 28370 ']' 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 28370 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 28370 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 28370' 00:24:38.408 killing process with pid 28370 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 28370 00:24:38.408 [2024-07-25 12:11:15.562747] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:38.408 [2024-07-25 12:11:15.562777] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:38.408 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 28370 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.666 12:11:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.570 00:24:40.570 real 0m9.365s 00:24:40.570 user 0m3.056s 00:24:40.570 sys 0m4.752s 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.570 ************************************ 00:24:40.570 END TEST nvmf_async_init 00:24:40.570 ************************************ 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.570 12:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.829 ************************************ 00:24:40.829 START TEST dma 00:24:40.829 ************************************ 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:40.829 * Looking for test storage... 
00:24:40.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.829 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.830 12:11:17 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:40.830 00:24:40.830 real 0m0.101s 00:24:40.830 user 0m0.044s 00:24:40.830 sys 0m0.064s 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.830 12:11:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:40.830 ************************************ 00:24:40.830 END TEST dma 00:24:40.830 ************************************ 00:24:40.830 12:11:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:40.830 12:11:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.830 12:11:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.830 12:11:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.830 ************************************ 00:24:40.830 START TEST nvmf_identify 00:24:40.830 ************************************ 00:24:40.830 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:41.089 * Looking for test storage... 
00:24:41.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.089 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.090 12:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.656 12:11:23 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:47.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.656 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:47.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:47.657 Found net devices under 0000:af:00.0: cvl_0_0 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:47.657 Found net devices under 0000:af:00.1: cvl_0_1 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:24:47.657 00:24:47.657 --- 10.0.0.2 ping statistics --- 00:24:47.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.657 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:24:47.657 00:24:47.657 --- 10.0.0.1 ping statistics --- 00:24:47.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.657 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.657 12:11:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=32152 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 32152 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 32152 ']' 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.657 12:11:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.657 [2024-07-25 12:11:24.058992] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:24:47.657 [2024-07-25 12:11:24.059045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.657 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.657 [2024-07-25 12:11:24.146067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.657 [2024-07-25 12:11:24.241068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.657 [2024-07-25 12:11:24.241109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.657 [2024-07-25 12:11:24.241119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.657 [2024-07-25 12:11:24.241128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.657 [2024-07-25 12:11:24.241135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.657 [2024-07-25 12:11:24.241176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.657 [2024-07-25 12:11:24.241290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.657 [2024-07-25 12:11:24.241402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.657 [2024-07-25 12:11:24.241403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 [2024-07-25 12:11:25.017190] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 Malloc0 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.916 12:11:25 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 [2024-07-25 12:11:25.113331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 12:11:25 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.916 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.916 [ 00:24:47.916 { 00:24:47.916 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.916 "subtype": "Discovery", 00:24:47.916 "listen_addresses": [ 00:24:47.916 { 00:24:47.916 "trtype": "TCP", 00:24:47.916 "adrfam": "IPv4", 00:24:47.916 "traddr": "10.0.0.2", 00:24:47.916 "trsvcid": "4420" 00:24:47.916 } 00:24:47.916 ], 00:24:47.916 "allow_any_host": true, 00:24:47.916 "hosts": [] 00:24:47.916 }, 00:24:47.916 { 00:24:47.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.916 "subtype": "NVMe", 00:24:47.916 "listen_addresses": [ 00:24:47.916 { 00:24:47.916 "trtype": "TCP", 00:24:47.916 "adrfam": "IPv4", 00:24:47.916 "traddr": "10.0.0.2", 00:24:47.916 "trsvcid": "4420" 00:24:47.916 } 00:24:47.916 ], 00:24:47.916 "allow_any_host": true, 00:24:47.916 "hosts": [], 00:24:47.916 "serial_number": "SPDK00000000000001", 00:24:47.916 "model_number": "SPDK bdev Controller", 00:24:47.916 "max_namespaces": 32, 00:24:47.916 "min_cntlid": 1, 00:24:47.916 "max_cntlid": 65519, 00:24:47.916 "namespaces": [ 00:24:47.916 { 00:24:47.916 "nsid": 1, 00:24:47.916 "bdev_name": "Malloc0", 00:24:47.916 "name": "Malloc0", 00:24:47.916 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:47.916 "eui64": "ABCDEF0123456789", 00:24:47.917 "uuid": "7acb017a-0b77-404a-864f-ed16235d14b8" 00:24:47.917 } 00:24:47.917 ] 00:24:47.917 } 00:24:47.917 ] 00:24:47.917 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.917 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:47.917 [2024-07-25 12:11:25.163884] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:47.917 [2024-07-25 12:11:25.163919] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32435 ] 00:24:47.917 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.917 [2024-07-25 12:11:25.200135] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:47.917 [2024-07-25 12:11:25.200194] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.917 [2024-07-25 12:11:25.200201] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.917 [2024-07-25 12:11:25.200213] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.917 [2024-07-25 12:11:25.200224] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.917 [2024-07-25 12:11:25.200644] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:47.917 [2024-07-25 12:11:25.200678] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2027ec0 0 00:24:47.917 [2024-07-25 12:11:25.214611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.917 [2024-07-25 12:11:25.214632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.917 [2024-07-25 12:11:25.214638] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:24:47.917 [2024-07-25 12:11:25.214645] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.917 [2024-07-25 12:11:25.214694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.917 [2024-07-25 12:11:25.214701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.917 [2024-07-25 12:11:25.214707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:47.917 [2024-07-25 12:11:25.214723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.917 [2024-07-25 12:11:25.214743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.177 [2024-07-25 12:11:25.222614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.177 [2024-07-25 12:11:25.222627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.177 [2024-07-25 12:11:25.222631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.222637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.177 [2024-07-25 12:11:25.222650] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:48.177 [2024-07-25 12:11:25.222659] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:48.177 [2024-07-25 12:11:25.222665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:48.177 [2024-07-25 12:11:25.222683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.222688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.222692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.177 [2024-07-25 12:11:25.222702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.177 [2024-07-25 12:11:25.222719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.177 [2024-07-25 12:11:25.222948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.177 [2024-07-25 12:11:25.222957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.177 [2024-07-25 12:11:25.222961] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.222967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.177 [2024-07-25 12:11:25.222976] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:48.177 [2024-07-25 12:11:25.222986] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:48.177 [2024-07-25 12:11:25.222996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.177 [2024-07-25 12:11:25.223014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.177 [2024-07-25 12:11:25.223028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.177 [2024-07-25 12:11:25.223142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.177 [2024-07-25 12:11:25.223151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:48.177 [2024-07-25 12:11:25.223155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.177 [2024-07-25 12:11:25.223167] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:48.177 [2024-07-25 12:11:25.223177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:48.177 [2024-07-25 12:11:25.223189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.177 [2024-07-25 12:11:25.223207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.177 [2024-07-25 12:11:25.223221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.177 [2024-07-25 12:11:25.223333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.177 [2024-07-25 12:11:25.223342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.177 [2024-07-25 12:11:25.223346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.177 [2024-07-25 12:11:25.223357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:48.177 [2024-07-25 12:11:25.223369] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.177 [2024-07-25 12:11:25.223379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.177 [2024-07-25 12:11:25.223387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.177 [2024-07-25 12:11:25.223401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.177 [2024-07-25 12:11:25.223518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.177 [2024-07-25 12:11:25.223526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.178 [2024-07-25 12:11:25.223531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.223535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.178 [2024-07-25 12:11:25.223541] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:48.178 [2024-07-25 12:11:25.223547] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:48.178 [2024-07-25 12:11:25.223558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:48.178 [2024-07-25 12:11:25.223665] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:48.178 [2024-07-25 12:11:25.223672] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:48.178 [2024-07-25 12:11:25.223684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.223689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.223693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.223702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.178 [2024-07-25 12:11:25.223716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.178 [2024-07-25 12:11:25.223838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.178 [2024-07-25 12:11:25.223847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.178 [2024-07-25 12:11:25.223851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.223856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.178 [2024-07-25 12:11:25.223865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:48.178 [2024-07-25 12:11:25.223877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.223882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.223887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.223896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.178 [2024-07-25 12:11:25.223908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.178 [2024-07-25 
12:11:25.224026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.178 [2024-07-25 12:11:25.224034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.178 [2024-07-25 12:11:25.224039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.224044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.178 [2024-07-25 12:11:25.224049] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:48.178 [2024-07-25 12:11:25.224056] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:48.178 [2024-07-25 12:11:25.224066] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:48.178 [2024-07-25 12:11:25.224076] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:48.178 [2024-07-25 12:11:25.224088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.224092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.224101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.178 [2024-07-25 12:11:25.224116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.178 [2024-07-25 12:11:25.224272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.178 [2024-07-25 12:11:25.224280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:24:48.178 [2024-07-25 12:11:25.224285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.224290] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2027ec0): datao=0, datal=4096, cccid=0 00:24:48.178 [2024-07-25 12:11:25.224296] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20aae40) on tqpair(0x2027ec0): expected_datao=0, payload_size=4096 00:24:48.178 [2024-07-25 12:11:25.224302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.224355] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.224361] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.265804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.178 [2024-07-25 12:11:25.265823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.178 [2024-07-25 12:11:25.265828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.265834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.178 [2024-07-25 12:11:25.265844] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:48.178 [2024-07-25 12:11:25.265851] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:48.178 [2024-07-25 12:11:25.265857] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:48.178 [2024-07-25 12:11:25.265868] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:48.178 [2024-07-25 12:11:25.265874] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:24:48.178 [2024-07-25 12:11:25.265880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:48.178 [2024-07-25 12:11:25.265891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:48.178 [2024-07-25 12:11:25.265905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.265910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.265915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.265926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:48.178 [2024-07-25 12:11:25.265943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.178 [2024-07-25 12:11:25.266084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.178 [2024-07-25 12:11:25.266092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.178 [2024-07-25 12:11:25.266097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.178 [2024-07-25 12:11:25.266111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.266128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.178 [2024-07-25 12:11:25.266136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266145] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.266152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.178 [2024-07-25 12:11:25.266160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.266176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.178 [2024-07-25 12:11:25.266184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.266200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.178 [2024-07-25 12:11:25.266206] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:48.178 [2024-07-25 12:11:25.266220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:24:48.178 [2024-07-25 12:11:25.266228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2027ec0) 00:24:48.178 [2024-07-25 12:11:25.266244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.178 [2024-07-25 12:11:25.266259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aae40, cid 0, qid 0 00:24:48.178 [2024-07-25 12:11:25.266266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20aafc0, cid 1, qid 0 00:24:48.178 [2024-07-25 12:11:25.266272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab140, cid 2, qid 0 00:24:48.178 [2024-07-25 12:11:25.266278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab2c0, cid 3, qid 0 00:24:48.178 [2024-07-25 12:11:25.266284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab440, cid 4, qid 0 00:24:48.178 [2024-07-25 12:11:25.266487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.178 [2024-07-25 12:11:25.266495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.178 [2024-07-25 12:11:25.266500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.178 [2024-07-25 12:11:25.266505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab440) on tqpair=0x2027ec0 00:24:48.178 [2024-07-25 12:11:25.266512] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:48.178 [2024-07-25 12:11:25.266518] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:48.179 [2024-07-25 12:11:25.266532] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2027ec0) 00:24:48.179 [2024-07-25 12:11:25.266546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.179 [2024-07-25 12:11:25.266559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab440, cid 4, qid 0 00:24:48.179 [2024-07-25 12:11:25.266701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.179 [2024-07-25 12:11:25.266710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.179 [2024-07-25 12:11:25.266714] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2027ec0): datao=0, datal=4096, cccid=4 00:24:48.179 [2024-07-25 12:11:25.266724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ab440) on tqpair(0x2027ec0): expected_datao=0, payload_size=4096 00:24:48.179 [2024-07-25 12:11:25.266730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266802] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266807] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.179 [2024-07-25 12:11:25.266882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.179 [2024-07-25 12:11:25.266886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab440) on tqpair=0x2027ec0 00:24:48.179 [2024-07-25 12:11:25.266906] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:48.179 [2024-07-25 12:11:25.266933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2027ec0) 00:24:48.179 [2024-07-25 12:11:25.266948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.179 [2024-07-25 12:11:25.266957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.266969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2027ec0) 00:24:48.179 [2024-07-25 12:11:25.266977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.179 [2024-07-25 12:11:25.266995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab440, cid 4, qid 0 00:24:48.179 [2024-07-25 12:11:25.267002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab5c0, cid 5, qid 0 00:24:48.179 [2024-07-25 12:11:25.267198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.179 [2024-07-25 12:11:25.267206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.179 [2024-07-25 12:11:25.267210] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.267215] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2027ec0): datao=0, datal=1024, cccid=4 00:24:48.179 [2024-07-25 12:11:25.267220] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ab440) on tqpair(0x2027ec0): expected_datao=0, 
payload_size=1024 00:24:48.179 [2024-07-25 12:11:25.267226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.267234] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.267239] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.267246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.179 [2024-07-25 12:11:25.267253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.179 [2024-07-25 12:11:25.267258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.267262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab5c0) on tqpair=0x2027ec0 00:24:48.179 [2024-07-25 12:11:25.307841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.179 [2024-07-25 12:11:25.307854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.179 [2024-07-25 12:11:25.307859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.307864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab440) on tqpair=0x2027ec0 00:24:48.179 [2024-07-25 12:11:25.307886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.307891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2027ec0) 00:24:48.179 [2024-07-25 12:11:25.307901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.179 [2024-07-25 12:11:25.307921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab440, cid 4, qid 0 00:24:48.179 [2024-07-25 12:11:25.308099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.179 [2024-07-25 12:11:25.308107] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.179 [2024-07-25 12:11:25.308111] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.308116] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2027ec0): datao=0, datal=3072, cccid=4 00:24:48.179 [2024-07-25 12:11:25.308122] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ab440) on tqpair(0x2027ec0): expected_datao=0, payload_size=3072 00:24:48.179 [2024-07-25 12:11:25.308127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.308158] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.308163] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.348818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.179 [2024-07-25 12:11:25.348831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.179 [2024-07-25 12:11:25.348835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.348840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab440) on tqpair=0x2027ec0 00:24:48.179 [2024-07-25 12:11:25.348856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.179 [2024-07-25 12:11:25.348861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2027ec0) 00:24:48.179 [2024-07-25 12:11:25.348870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.179 [2024-07-25 12:11:25.348890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab440, cid 4, qid 0 00:24:48.179 [2024-07-25 12:11:25.349045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.179 [2024-07-25 
12:11:25.349053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:48.179 [2024-07-25 12:11:25.349058] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:48.179 [2024-07-25 12:11:25.349062] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2027ec0): datao=0, datal=8, cccid=4
00:24:48.179 [2024-07-25 12:11:25.349068] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20ab440) on tqpair(0x2027ec0): expected_datao=0, payload_size=8
00:24:48.179 [2024-07-25 12:11:25.349073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.179 [2024-07-25 12:11:25.349081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:48.179 [2024-07-25 12:11:25.349086] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:48.179 [2024-07-25 12:11:25.389830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.179 [2024-07-25 12:11:25.389842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.179 [2024-07-25 12:11:25.389846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.179 [2024-07-25 12:11:25.389851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab440) on tqpair=0x2027ec0
00:24:48.179 =====================================================
00:24:48.179 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:48.179 =====================================================
00:24:48.179 Controller Capabilities/Features
00:24:48.179 ================================
00:24:48.179 Vendor ID: 0000
00:24:48.179 Subsystem Vendor ID: 0000
00:24:48.179 Serial Number: ....................
00:24:48.179 Model Number: ........................................
00:24:48.179 Firmware Version: 24.09
00:24:48.179 Recommended Arb Burst: 0
00:24:48.179 IEEE OUI Identifier: 00 00 00
00:24:48.179 Multi-path I/O
00:24:48.179 May have multiple subsystem ports: No
00:24:48.179 May have multiple controllers: No
00:24:48.179 Associated with SR-IOV VF: No
00:24:48.179 Max Data Transfer Size: 131072
00:24:48.179 Max Number of Namespaces: 0
00:24:48.179 Max Number of I/O Queues: 1024
00:24:48.179 NVMe Specification Version (VS): 1.3
00:24:48.179 NVMe Specification Version (Identify): 1.3
00:24:48.179 Maximum Queue Entries: 128
00:24:48.179 Contiguous Queues Required: Yes
00:24:48.179 Arbitration Mechanisms Supported
00:24:48.179 Weighted Round Robin: Not Supported
00:24:48.179 Vendor Specific: Not Supported
00:24:48.179 Reset Timeout: 15000 ms
00:24:48.179 Doorbell Stride: 4 bytes
00:24:48.179 NVM Subsystem Reset: Not Supported
00:24:48.179 Command Sets Supported
00:24:48.179 NVM Command Set: Supported
00:24:48.179 Boot Partition: Not Supported
00:24:48.179 Memory Page Size Minimum: 4096 bytes
00:24:48.179 Memory Page Size Maximum: 4096 bytes
00:24:48.179 Persistent Memory Region: Not Supported
00:24:48.179 Optional Asynchronous Events Supported
00:24:48.179 Namespace Attribute Notices: Not Supported
00:24:48.179 Firmware Activation Notices: Not Supported
00:24:48.179 ANA Change Notices: Not Supported
00:24:48.179 PLE Aggregate Log Change Notices: Not Supported
00:24:48.179 LBA Status Info Alert Notices: Not Supported
00:24:48.179 EGE Aggregate Log Change Notices: Not Supported
00:24:48.179 Normal NVM Subsystem Shutdown event: Not Supported
00:24:48.179 Zone Descriptor Change Notices: Not Supported
00:24:48.179 Discovery Log Change Notices: Supported
00:24:48.179 Controller Attributes
00:24:48.180 128-bit Host Identifier: Not Supported
00:24:48.180 Non-Operational Permissive Mode: Not Supported
00:24:48.180 NVM Sets: Not Supported
00:24:48.180 Read Recovery Levels: Not Supported
00:24:48.180 Endurance Groups: Not Supported
00:24:48.180 Predictable Latency Mode: Not Supported
00:24:48.180 Traffic Based Keep ALive: Not Supported
00:24:48.180 Namespace Granularity: Not Supported
00:24:48.180 SQ Associations: Not Supported
00:24:48.180 UUID List: Not Supported
00:24:48.180 Multi-Domain Subsystem: Not Supported
00:24:48.180 Fixed Capacity Management: Not Supported
00:24:48.180 Variable Capacity Management: Not Supported
00:24:48.180 Delete Endurance Group: Not Supported
00:24:48.180 Delete NVM Set: Not Supported
00:24:48.180 Extended LBA Formats Supported: Not Supported
00:24:48.180 Flexible Data Placement Supported: Not Supported
00:24:48.180 
00:24:48.180 Controller Memory Buffer Support
00:24:48.180 ================================
00:24:48.180 Supported: No
00:24:48.180 
00:24:48.180 Persistent Memory Region Support
00:24:48.180 ================================
00:24:48.180 Supported: No
00:24:48.180 
00:24:48.180 Admin Command Set Attributes
00:24:48.180 ============================
00:24:48.180 Security Send/Receive: Not Supported
00:24:48.180 Format NVM: Not Supported
00:24:48.180 Firmware Activate/Download: Not Supported
00:24:48.180 Namespace Management: Not Supported
00:24:48.180 Device Self-Test: Not Supported
00:24:48.180 Directives: Not Supported
00:24:48.180 NVMe-MI: Not Supported
00:24:48.180 Virtualization Management: Not Supported
00:24:48.180 Doorbell Buffer Config: Not Supported
00:24:48.180 Get LBA Status Capability: Not Supported
00:24:48.180 Command & Feature Lockdown Capability: Not Supported
00:24:48.180 Abort Command Limit: 1
00:24:48.180 Async Event Request Limit: 4
00:24:48.180 Number of Firmware Slots: N/A
00:24:48.180 Firmware Slot 1 Read-Only: N/A
00:24:48.180 Firmware Activation Without Reset: N/A
00:24:48.180 Multiple Update Detection Support: N/A
00:24:48.180 Firmware Update Granularity: No Information Provided
00:24:48.180 Per-Namespace SMART Log: No
00:24:48.180 Asymmetric Namespace Access Log Page: Not Supported
00:24:48.180 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:48.180 Command Effects Log Page: Not Supported
00:24:48.180 Get Log Page Extended Data: Supported
00:24:48.180 Telemetry Log Pages: Not Supported
00:24:48.180 Persistent Event Log Pages: Not Supported
00:24:48.180 Supported Log Pages Log Page: May Support
00:24:48.180 Commands Supported & Effects Log Page: Not Supported
00:24:48.180 Feature Identifiers & Effects Log Page:May Support
00:24:48.180 NVMe-MI Commands & Effects Log Page: May Support
00:24:48.180 Data Area 4 for Telemetry Log: Not Supported
00:24:48.180 Error Log Page Entries Supported: 128
00:24:48.180 Keep Alive: Not Supported
00:24:48.180 
00:24:48.180 NVM Command Set Attributes
00:24:48.180 ==========================
00:24:48.180 Submission Queue Entry Size
00:24:48.180 Max: 1
00:24:48.180 Min: 1
00:24:48.180 Completion Queue Entry Size
00:24:48.180 Max: 1
00:24:48.180 Min: 1
00:24:48.180 Number of Namespaces: 0
00:24:48.180 Compare Command: Not Supported
00:24:48.180 Write Uncorrectable Command: Not Supported
00:24:48.180 Dataset Management Command: Not Supported
00:24:48.180 Write Zeroes Command: Not Supported
00:24:48.180 Set Features Save Field: Not Supported
00:24:48.180 Reservations: Not Supported
00:24:48.180 Timestamp: Not Supported
00:24:48.180 Copy: Not Supported
00:24:48.180 Volatile Write Cache: Not Present
00:24:48.180 Atomic Write Unit (Normal): 1
00:24:48.180 Atomic Write Unit (PFail): 1
00:24:48.180 Atomic Compare & Write Unit: 1
00:24:48.180 Fused Compare & Write: Supported
00:24:48.180 Scatter-Gather List
00:24:48.180 SGL Command Set: Supported
00:24:48.180 SGL Keyed: Supported
00:24:48.180 SGL Bit Bucket Descriptor: Not Supported
00:24:48.180 SGL Metadata Pointer: Not Supported
00:24:48.180 Oversized SGL: Not Supported
00:24:48.180 SGL Metadata Address: Not Supported
00:24:48.180 SGL Offset: Supported
00:24:48.180 Transport SGL Data Block: Not Supported
00:24:48.180 Replay Protected Memory Block: Not Supported
00:24:48.180 
00:24:48.180 Firmware Slot Information
00:24:48.180 =========================
00:24:48.180 Active slot: 0
00:24:48.180 
00:24:48.180 
00:24:48.180 Error Log
00:24:48.180 =========
00:24:48.180 
00:24:48.180 Active Namespaces
00:24:48.180 =================
00:24:48.180 Discovery Log Page
00:24:48.180 ==================
00:24:48.180 Generation Counter: 2
00:24:48.180 Number of Records: 2
00:24:48.180 Record Format: 0
00:24:48.180 
00:24:48.180 Discovery Log Entry 0
00:24:48.180 ----------------------
00:24:48.180 Transport Type: 3 (TCP)
00:24:48.180 Address Family: 1 (IPv4)
00:24:48.180 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:48.180 Entry Flags:
00:24:48.180 Duplicate Returned Information: 1
00:24:48.180 Explicit Persistent Connection Support for Discovery: 1
00:24:48.180 Transport Requirements:
00:24:48.180 Secure Channel: Not Required
00:24:48.180 Port ID: 0 (0x0000)
00:24:48.180 Controller ID: 65535 (0xffff)
00:24:48.180 Admin Max SQ Size: 128
00:24:48.180 Transport Service Identifier: 4420
00:24:48.180 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:48.180 Transport Address: 10.0.0.2
00:24:48.180 Discovery Log Entry 1
00:24:48.180 ----------------------
00:24:48.180 Transport Type: 3 (TCP)
00:24:48.180 Address Family: 1 (IPv4)
00:24:48.180 Subsystem Type: 2 (NVM Subsystem)
00:24:48.180 Entry Flags:
00:24:48.180 Duplicate Returned Information: 0
00:24:48.180 Explicit Persistent Connection Support for Discovery: 0
00:24:48.180 Transport Requirements:
00:24:48.180 Secure Channel: Not Required
00:24:48.180 Port ID: 0 (0x0000)
00:24:48.180 Controller ID: 65535 (0xffff)
00:24:48.180 Admin Max SQ Size: 128
00:24:48.180 Transport Service Identifier: 4420
00:24:48.180 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:48.180 Transport Address: 10.0.0.2 [2024-07-25 12:11:25.389952] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:24:48.180 [2024-07-25 12:11:25.389966]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aae40) on tqpair=0x2027ec0 00:24:48.180 [2024-07-25 12:11:25.389974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.180 [2024-07-25 12:11:25.389980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20aafc0) on tqpair=0x2027ec0 00:24:48.180 [2024-07-25 12:11:25.389986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.180 [2024-07-25 12:11:25.389992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab140) on tqpair=0x2027ec0 00:24:48.180 [2024-07-25 12:11:25.389998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.180 [2024-07-25 12:11:25.390004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab2c0) on tqpair=0x2027ec0 00:24:48.180 [2024-07-25 12:11:25.390010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.180 [2024-07-25 12:11:25.390023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.180 [2024-07-25 12:11:25.390029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.180 [2024-07-25 12:11:25.390033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2027ec0) 00:24:48.180 [2024-07-25 12:11:25.390042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.180 [2024-07-25 12:11:25.390060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab2c0, cid 3, qid 0 00:24:48.180 [2024-07-25 12:11:25.390210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.180 [2024-07-25 12:11:25.390218] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.180 [2024-07-25 12:11:25.390222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.180 [2024-07-25 12:11:25.390227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab2c0) on tqpair=0x2027ec0 00:24:48.180 [2024-07-25 12:11:25.390237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.180 [2024-07-25 12:11:25.390244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.180 [2024-07-25 12:11:25.390249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2027ec0) 00:24:48.180 [2024-07-25 12:11:25.390257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.180 [2024-07-25 12:11:25.390276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab2c0, cid 3, qid 0 00:24:48.181 [2024-07-25 12:11:25.390460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.181 [2024-07-25 12:11:25.390467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.181 [2024-07-25 12:11:25.390472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.181 [2024-07-25 12:11:25.390476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab2c0) on tqpair=0x2027ec0 00:24:48.181 [2024-07-25 12:11:25.390482] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:48.181 [2024-07-25 12:11:25.390488] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:48.181 [2024-07-25 12:11:25.390500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.181 [2024-07-25 12:11:25.390505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.181 [2024-07-25 
12:11:25.390510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2027ec0) 00:24:48.181 [2024-07-25 12:11:25.390518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.181 [2024-07-25 12:11:25.390531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab2c0, cid 3, qid 0 00:24:48.181 [2024-07-25 12:11:25.394610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.181 [2024-07-25 12:11:25.394621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.181 [2024-07-25 12:11:25.394626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.181 [2024-07-25 12:11:25.394630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab2c0) on tqpair=0x2027ec0 00:24:48.181 [2024-07-25 12:11:25.394644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.181 [2024-07-25 12:11:25.394650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.181 [2024-07-25 12:11:25.394655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2027ec0) 00:24:48.181 [2024-07-25 12:11:25.394663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.181 [2024-07-25 12:11:25.394678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ab2c0, cid 3, qid 0 00:24:48.181 [2024-07-25 12:11:25.394936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.181 [2024-07-25 12:11:25.394944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.181 [2024-07-25 12:11:25.394948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.181 [2024-07-25 12:11:25.394953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ab2c0) on tqpair=0x2027ec0 
00:24:48.181 [2024-07-25 12:11:25.394963] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:24:48.181 00:24:48.181 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:48.181 [2024-07-25 12:11:25.437737] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:48.181 [2024-07-25 12:11:25.437773] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32441 ] 00:24:48.181 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.181 [2024-07-25 12:11:25.475832] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:48.181 [2024-07-25 12:11:25.475883] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:48.181 [2024-07-25 12:11:25.475890] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:48.181 [2024-07-25 12:11:25.475905] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:48.181 [2024-07-25 12:11:25.475914] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:48.443 [2024-07-25 12:11:25.476245] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:48.443 [2024-07-25 12:11:25.476274] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf08ec0 0 00:24:48.443 [2024-07-25 12:11:25.490614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 1 00:24:48.443 [2024-07-25 12:11:25.490633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:48.443 [2024-07-25 12:11:25.490640] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:48.443 [2024-07-25 12:11:25.490645] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:48.443 [2024-07-25 12:11:25.490685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.490692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.490697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.443 [2024-07-25 12:11:25.490712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:48.443 [2024-07-25 12:11:25.490732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.443 [2024-07-25 12:11:25.497616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.443 [2024-07-25 12:11:25.497628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.443 [2024-07-25 12:11:25.497633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.497638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.443 [2024-07-25 12:11:25.497653] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:48.443 [2024-07-25 12:11:25.497661] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:48.443 [2024-07-25 12:11:25.497668] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:48.443 [2024-07-25 12:11:25.497683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.497689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.497694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.443 [2024-07-25 12:11:25.497704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.443 [2024-07-25 12:11:25.497721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.443 [2024-07-25 12:11:25.497997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.443 [2024-07-25 12:11:25.498006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.443 [2024-07-25 12:11:25.498011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.443 [2024-07-25 12:11:25.498025] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:48.443 [2024-07-25 12:11:25.498039] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:48.443 [2024-07-25 12:11:25.498049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.443 [2024-07-25 12:11:25.498069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.443 [2024-07-25 12:11:25.498084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 
0, qid 0 00:24:48.443 [2024-07-25 12:11:25.498279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.443 [2024-07-25 12:11:25.498288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.443 [2024-07-25 12:11:25.498292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.443 [2024-07-25 12:11:25.498303] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:48.443 [2024-07-25 12:11:25.498314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:48.443 [2024-07-25 12:11:25.498323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.443 [2024-07-25 12:11:25.498341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.443 [2024-07-25 12:11:25.498354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.443 [2024-07-25 12:11:25.498515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.443 [2024-07-25 12:11:25.498524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.443 [2024-07-25 12:11:25.498529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.443 [2024-07-25 12:11:25.498540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:48.443 [2024-07-25 12:11:25.498552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.443 [2024-07-25 12:11:25.498571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.443 [2024-07-25 12:11:25.498584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.443 [2024-07-25 12:11:25.498743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.443 [2024-07-25 12:11:25.498752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.443 [2024-07-25 12:11:25.498756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.443 [2024-07-25 12:11:25.498767] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:48.443 [2024-07-25 12:11:25.498773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:48.443 [2024-07-25 12:11:25.498784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:48.443 [2024-07-25 12:11:25.498893] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:48.443 [2024-07-25 12:11:25.498899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:48.443 [2024-07-25 12:11:25.498909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.498919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.443 [2024-07-25 12:11:25.498927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.443 [2024-07-25 12:11:25.498942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.443 [2024-07-25 12:11:25.499137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.443 [2024-07-25 12:11:25.499145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.443 [2024-07-25 12:11:25.499149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.443 [2024-07-25 12:11:25.499154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.499160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:48.444 [2024-07-25 12:11:25.499173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.499191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.444 [2024-07-25 12:11:25.499205] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.444 [2024-07-25 12:11:25.499370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.499379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.499384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.499394] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:48.444 [2024-07-25 12:11:25.499400] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.499410] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:48.444 [2024-07-25 12:11:25.499420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.499432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.499445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.444 [2024-07-25 12:11:25.499459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.444 [2024-07-25 12:11:25.499682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.444 [2024-07-25 12:11:25.499692] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.444 [2024-07-25 12:11:25.499697] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499702] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=4096, cccid=0 00:24:48.444 [2024-07-25 12:11:25.499710] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8be40) on tqpair(0xf08ec0): expected_datao=0, payload_size=4096 00:24:48.444 [2024-07-25 12:11:25.499716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499763] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.499768] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.540800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.540816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.540820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.540825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.540835] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:48.444 [2024-07-25 12:11:25.540841] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:48.444 [2024-07-25 12:11:25.540846] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:48.444 [2024-07-25 12:11:25.540852] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:48.444 [2024-07-25 12:11:25.540858] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses 
compare and write: 1 00:24:48.444 [2024-07-25 12:11:25.540864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.540875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.540889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.540894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.540899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.540909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:48.444 [2024-07-25 12:11:25.540925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.444 [2024-07-25 12:11:25.541081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.541090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.541094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.541108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.541124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:48.444 [2024-07-25 12:11:25.541132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.541149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.444 [2024-07-25 12:11:25.541156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.541175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.444 [2024-07-25 12:11:25.541183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.541199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.444 [2024-07-25 12:11:25.541204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.541218] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:48.444 [2024-07-25 
12:11:25.541226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.541239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.444 [2024-07-25 12:11:25.541254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8be40, cid 0, qid 0 00:24:48.444 [2024-07-25 12:11:25.541261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8bfc0, cid 1, qid 0 00:24:48.444 [2024-07-25 12:11:25.541267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c140, cid 2, qid 0 00:24:48.444 [2024-07-25 12:11:25.541273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0 00:24:48.444 [2024-07-25 12:11:25.541279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0 00:24:48.444 [2024-07-25 12:11:25.541581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.541590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.541594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.541599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.545457] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:48.444 [2024-07-25 12:11:25.545466] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.545481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.545490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.545498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.545503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.545508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.545517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:48.444 [2024-07-25 12:11:25.545533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0 00:24:48.444 [2024-07-25 12:11:25.545777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.545786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.545791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.545796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.545877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.545890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.545900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.545904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.545913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.444 [2024-07-25 12:11:25.545929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0 00:24:48.444 [2024-07-25 12:11:25.546122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.444 [2024-07-25 12:11:25.546131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.444 [2024-07-25 12:11:25.546135] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546140] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=4096, cccid=4 00:24:48.444 [2024-07-25 12:11:25.546145] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c440) on tqpair(0xf08ec0): expected_datao=0, payload_size=4096 00:24:48.444 [2024-07-25 12:11:25.546151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546159] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546164] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.546216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.546221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.546237] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:48.444 [2024-07-25 12:11:25.546253] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.546265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.546275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.546288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.444 [2024-07-25 12:11:25.546303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0 00:24:48.444 [2024-07-25 12:11:25.546497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.444 [2024-07-25 12:11:25.546505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.444 [2024-07-25 12:11:25.546510] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546514] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=4096, cccid=4 00:24:48.444 [2024-07-25 12:11:25.546520] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c440) on tqpair(0xf08ec0): expected_datao=0, payload_size=4096 00:24:48.444 [2024-07-25 12:11:25.546525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546534] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546538] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.444 [2024-07-25 12:11:25.546599] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.444 [2024-07-25 12:11:25.546611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0 00:24:48.444 [2024-07-25 12:11:25.546632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.546644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:48.444 [2024-07-25 12:11:25.546654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf08ec0) 00:24:48.444 [2024-07-25 12:11:25.546667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.444 [2024-07-25 12:11:25.546682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0 00:24:48.444 [2024-07-25 12:11:25.546865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.444 [2024-07-25 12:11:25.546874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.444 [2024-07-25 12:11:25.546879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.444 [2024-07-25 12:11:25.546883] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=4096, cccid=4 00:24:48.444 [2024-07-25 12:11:25.546888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c440) on tqpair(0xf08ec0): expected_datao=0, payload_size=4096 00:24:48.444 [2024-07-25 12:11:25.546894] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.546902] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.546907] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.546951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.444 [2024-07-25 12:11:25.546959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.444 [2024-07-25 12:11:25.546963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.546968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0
00:24:48.444 [2024-07-25 12:11:25.546977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.546987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.546997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.547006] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.547013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.547020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.547026] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:24:48.444 [2024-07-25 12:11:25.547032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:24:48.444 [2024-07-25 12:11:25.547038] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:24:48.444 [2024-07-25 12:11:25.547054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.547060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf08ec0)
00:24:48.444 [2024-07-25 12:11:25.547071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.444 [2024-07-25 12:11:25.547080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.547084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.547089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf08ec0)
00:24:48.444 [2024-07-25 12:11:25.547096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.444 [2024-07-25 12:11:25.547114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0
00:24:48.444 [2024-07-25 12:11:25.547120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c5c0, cid 5, qid 0
00:24:48.444 [2024-07-25 12:11:25.547331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.444 [2024-07-25 12:11:25.547339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.444 [2024-07-25 12:11:25.547344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.547348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0
00:24:48.444 [2024-07-25 12:11:25.547357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.444 [2024-07-25 12:11:25.547364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.444 [2024-07-25 12:11:25.547368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.547373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c5c0) on tqpair=0xf08ec0
00:24:48.444 [2024-07-25 12:11:25.547385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.444 [2024-07-25 12:11:25.547390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.547398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.547412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c5c0, cid 5, qid 0
00:24:48.445 [2024-07-25 12:11:25.547617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.547626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.547630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.547634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c5c0) on tqpair=0xf08ec0
00:24:48.445 [2024-07-25 12:11:25.547646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.547651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.547660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.547674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c5c0, cid 5, qid 0
00:24:48.445 [2024-07-25 12:11:25.547873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.547881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.547886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.547891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c5c0) on tqpair=0xf08ec0
00:24:48.445 [2024-07-25 12:11:25.547903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.547908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.547916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.547929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c5c0, cid 5, qid 0
00:24:48.445 [2024-07-25 12:11:25.548095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.548104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.548108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.548113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c5c0) on tqpair=0xf08ec0
00:24:48.445 [2024-07-25 12:11:25.548131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.548137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.548145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.548154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.548159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.548166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.548175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.548180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.548188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.548197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.548202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf08ec0)
00:24:48.445 [2024-07-25 12:11:25.548209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.445 [2024-07-25 12:11:25.548224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c5c0, cid 5, qid 0
00:24:48.445 [2024-07-25 12:11:25.548231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c440, cid 4, qid 0
00:24:48.445 [2024-07-25 12:11:25.548237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c740, cid 6, qid 0
00:24:48.445 [2024-07-25 12:11:25.548243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c8c0, cid 7, qid 0
00:24:48.445 [2024-07-25 12:11:25.552621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:48.445 [2024-07-25 12:11:25.552631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:48.445 [2024-07-25 12:11:25.552636] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552640] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=8192, cccid=5
00:24:48.445 [2024-07-25 12:11:25.552646] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c5c0) on tqpair(0xf08ec0): expected_datao=0, payload_size=8192
00:24:48.445 [2024-07-25 12:11:25.552651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552660] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552665] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:48.445 [2024-07-25 12:11:25.552679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:48.445 [2024-07-25 12:11:25.552683] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552688] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=512, cccid=4
00:24:48.445 [2024-07-25 12:11:25.552693] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c440) on tqpair(0xf08ec0): expected_datao=0, payload_size=512
00:24:48.445 [2024-07-25 12:11:25.552699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552710] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552714] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:48.445 [2024-07-25 12:11:25.552728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:48.445 [2024-07-25 12:11:25.552733] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552737] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=512, cccid=6
00:24:48.445 [2024-07-25 12:11:25.552743] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c740) on tqpair(0xf08ec0): expected_datao=0, payload_size=512
00:24:48.445 [2024-07-25 12:11:25.552748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552756] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552760] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:48.445 [2024-07-25 12:11:25.552774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:48.445 [2024-07-25 12:11:25.552779] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552783] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf08ec0): datao=0, datal=4096, cccid=7
00:24:48.445 [2024-07-25 12:11:25.552789] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8c8c0) on tqpair(0xf08ec0): expected_datao=0, payload_size=4096
00:24:48.445 [2024-07-25 12:11:25.552794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552802] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552806] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.552820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.552824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c5c0) on tqpair=0xf08ec0
00:24:48.445 [2024-07-25 12:11:25.552844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.552852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.552856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c440) on tqpair=0xf08ec0
00:24:48.445 [2024-07-25 12:11:25.552873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.552880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.552885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c740) on tqpair=0xf08ec0
00:24:48.445 [2024-07-25 12:11:25.552898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.445 [2024-07-25 12:11:25.552906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.445 [2024-07-25 12:11:25.552910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.445 [2024-07-25 12:11:25.552914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c8c0) on tqpair=0xf08ec0
00:24:48.445 =====================================================
00:24:48.445 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:48.445 =====================================================
00:24:48.445 Controller Capabilities/Features
00:24:48.445 ================================
00:24:48.445 Vendor ID: 8086
00:24:48.445 Subsystem Vendor ID: 8086
00:24:48.445 Serial Number: SPDK00000000000001
00:24:48.445 Model Number: SPDK bdev Controller
00:24:48.445 Firmware Version: 24.09
00:24:48.445 Recommended Arb Burst: 6
00:24:48.445 IEEE OUI Identifier: e4 d2 5c
00:24:48.445 Multi-path I/O
00:24:48.445 May have multiple subsystem ports: Yes
00:24:48.445 May have multiple controllers: Yes
00:24:48.445 Associated with SR-IOV VF: No
00:24:48.445 Max Data Transfer Size: 131072
00:24:48.445 Max Number of Namespaces: 32
00:24:48.445 Max Number of I/O Queues: 127
00:24:48.445 NVMe Specification Version (VS): 1.3
00:24:48.445 NVMe Specification Version (Identify): 1.3
00:24:48.445 Maximum Queue Entries: 128
00:24:48.445 Contiguous Queues Required: Yes
00:24:48.445 Arbitration Mechanisms Supported
00:24:48.445 Weighted Round Robin: Not Supported
00:24:48.445 Vendor Specific: Not Supported
00:24:48.445 Reset Timeout: 15000 ms
00:24:48.445 Doorbell Stride: 4 bytes
00:24:48.445 NVM Subsystem Reset: Not Supported
00:24:48.445 Command Sets Supported
00:24:48.445 NVM Command Set: Supported
00:24:48.445 Boot Partition: Not Supported
00:24:48.445 Memory Page Size Minimum: 4096 bytes
00:24:48.445 Memory Page Size Maximum: 4096 bytes
00:24:48.445 Persistent Memory Region: Not Supported
00:24:48.445 Optional Asynchronous Events Supported
00:24:48.445 Namespace Attribute Notices: Supported
00:24:48.445 Firmware Activation Notices: Not Supported
00:24:48.445 ANA Change Notices: Not Supported
00:24:48.445 PLE Aggregate Log Change Notices: Not Supported
00:24:48.445 LBA Status Info Alert Notices: Not Supported
00:24:48.445 EGE Aggregate Log Change Notices: Not Supported
00:24:48.445 Normal NVM Subsystem Shutdown event: Not Supported
00:24:48.445 Zone Descriptor Change Notices: Not Supported
00:24:48.445 Discovery Log Change Notices: Not Supported
00:24:48.445 Controller Attributes
00:24:48.445 128-bit Host Identifier: Supported
00:24:48.445 Non-Operational Permissive Mode: Not Supported
00:24:48.445 NVM Sets: Not Supported
00:24:48.445 Read Recovery Levels: Not Supported
00:24:48.445 Endurance Groups: Not Supported
00:24:48.445 Predictable Latency Mode: Not Supported
00:24:48.445 Traffic Based Keep ALive: Not Supported
00:24:48.445 Namespace Granularity: Not Supported
00:24:48.445 SQ Associations: Not Supported
00:24:48.445 UUID List: Not Supported
00:24:48.445 Multi-Domain Subsystem: Not Supported
00:24:48.445 Fixed Capacity Management: Not Supported
00:24:48.445 Variable Capacity Management: Not Supported
00:24:48.445 Delete Endurance Group: Not Supported
00:24:48.445 Delete NVM Set: Not Supported
00:24:48.445 Extended LBA Formats Supported: Not Supported
00:24:48.445 Flexible Data Placement Supported: Not Supported
00:24:48.445
00:24:48.445 Controller Memory Buffer Support
00:24:48.445 ================================
00:24:48.445 Supported: No
00:24:48.445
00:24:48.445 Persistent Memory Region Support
00:24:48.445 ================================
00:24:48.445 Supported: No
00:24:48.445
00:24:48.445 Admin Command Set Attributes
00:24:48.445 ============================
00:24:48.445 Security Send/Receive: Not Supported
00:24:48.445 Format NVM: Not Supported
00:24:48.445 Firmware Activate/Download: Not Supported
00:24:48.445 Namespace Management: Not Supported
00:24:48.445 Device Self-Test: Not Supported
00:24:48.445 Directives: Not Supported
00:24:48.445 NVMe-MI: Not Supported
00:24:48.445 Virtualization Management: Not Supported
00:24:48.445 Doorbell Buffer Config: Not Supported
00:24:48.445 Get LBA Status Capability: Not Supported
00:24:48.445 Command & Feature Lockdown Capability: Not Supported
00:24:48.445 Abort Command Limit: 4
00:24:48.445 Async Event Request Limit: 4
00:24:48.445 Number of Firmware Slots: N/A
00:24:48.445 Firmware Slot 1 Read-Only: N/A
00:24:48.445 Firmware Activation Without Reset: N/A
00:24:48.445 Multiple Update Detection Support: N/A
00:24:48.445 Firmware Update Granularity: No Information Provided
00:24:48.445 Per-Namespace SMART Log: No
00:24:48.445 Asymmetric Namespace Access Log Page: Not Supported
00:24:48.445 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:48.445 Command Effects Log Page: Supported
00:24:48.445 Get Log Page Extended Data: Supported
00:24:48.445 Telemetry Log Pages: Not Supported
00:24:48.445 Persistent Event Log Pages: Not Supported
00:24:48.445 Supported Log Pages Log Page: May Support
00:24:48.445 Commands Supported & Effects Log Page: Not Supported
00:24:48.445 Feature Identifiers & Effects Log Page:May Support
00:24:48.445 NVMe-MI Commands & Effects Log Page: May Support
00:24:48.445 Data Area 4 for Telemetry Log: Not Supported
00:24:48.445 Error Log Page Entries Supported: 128
00:24:48.445 Keep Alive: Supported
00:24:48.445 Keep Alive Granularity: 10000 ms
00:24:48.445
00:24:48.445 NVM Command Set Attributes
00:24:48.445 ==========================
00:24:48.445 Submission Queue Entry Size
00:24:48.445 Max: 64
00:24:48.445 Min: 64
00:24:48.445 Completion Queue Entry Size
00:24:48.445 Max: 16
00:24:48.445 Min: 16
00:24:48.445 Number of Namespaces: 32
00:24:48.445 Compare Command: Supported
00:24:48.445 Write Uncorrectable Command: Not Supported
00:24:48.445 Dataset Management Command: Supported
00:24:48.445 Write Zeroes Command: Supported
00:24:48.445 Set Features Save Field: Not Supported
00:24:48.445 Reservations: Supported
00:24:48.445 Timestamp: Not Supported
00:24:48.445 Copy: Supported
00:24:48.445 Volatile Write Cache: Present
00:24:48.445 Atomic Write Unit (Normal): 1
00:24:48.445 Atomic Write Unit (PFail): 1
00:24:48.445 Atomic Compare & Write Unit: 1
00:24:48.445 Fused Compare & Write: Supported
00:24:48.445 Scatter-Gather List
00:24:48.445 SGL Command Set: Supported
00:24:48.445 SGL Keyed: Supported
00:24:48.445 SGL Bit Bucket Descriptor: Not Supported
00:24:48.445 SGL Metadata Pointer: Not Supported
00:24:48.445 Oversized SGL: Not Supported
00:24:48.445 SGL Metadata Address: Not Supported
00:24:48.445 SGL Offset: Supported
00:24:48.445 Transport SGL Data Block: Not Supported
00:24:48.445 Replay Protected Memory Block: Not Supported
00:24:48.445
00:24:48.445 Firmware Slot Information
00:24:48.445 =========================
00:24:48.445 Active slot: 1
00:24:48.445 Slot 1 Firmware Revision: 24.09
00:24:48.445
00:24:48.445
00:24:48.445 Commands Supported and Effects
00:24:48.445 ==============================
00:24:48.445 Admin Commands
00:24:48.445 --------------
00:24:48.445 Get Log Page (02h): Supported
00:24:48.445 Identify (06h): Supported
00:24:48.445 Abort (08h): Supported
00:24:48.445 Set Features (09h): Supported
00:24:48.445 Get Features (0Ah): Supported
00:24:48.445 Asynchronous Event Request (0Ch): Supported
00:24:48.445 Keep Alive (18h): Supported
00:24:48.445 I/O Commands
00:24:48.445 ------------
00:24:48.445 Flush (00h): Supported LBA-Change
00:24:48.445 Write (01h): Supported LBA-Change
00:24:48.445 Read (02h): Supported
00:24:48.445 Compare (05h): Supported
00:24:48.445 Write Zeroes (08h): Supported LBA-Change
00:24:48.445 Dataset Management (09h): Supported LBA-Change
00:24:48.445 Copy (19h): Supported LBA-Change
00:24:48.445
00:24:48.445 Error Log
00:24:48.445 =========
00:24:48.445
00:24:48.445 Arbitration
00:24:48.445 ===========
00:24:48.445 Arbitration Burst: 1
00:24:48.445
00:24:48.445 Power Management
00:24:48.445 ================
00:24:48.445 Number of Power States: 1
00:24:48.445 Current Power State: Power State #0
00:24:48.445 Power State #0:
00:24:48.445 Max Power: 0.00 W
00:24:48.445 Non-Operational State: Operational
00:24:48.445 Entry Latency: Not Reported
00:24:48.445 Exit Latency: Not Reported
00:24:48.445 Relative Read Throughput: 0
00:24:48.445 Relative Read Latency: 0
00:24:48.445 Relative Write Throughput: 0
00:24:48.445 Relative Write Latency: 0
00:24:48.445 Idle Power: Not Reported
00:24:48.445 Active Power: Not Reported
00:24:48.445 Non-Operational Permissive Mode: Not Supported
00:24:48.445
00:24:48.445 Health Information
00:24:48.445 ==================
00:24:48.446 Critical Warnings:
00:24:48.446 Available Spare Space: OK
00:24:48.446 Temperature: OK
00:24:48.446 Device Reliability: OK
00:24:48.446 Read Only: No
00:24:48.446 Volatile Memory Backup: OK
00:24:48.446 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:48.446 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:48.446 Available Spare: 0%
00:24:48.446 Available Spare Threshold: 0%
00:24:48.446 Life Percentage Used:[2024-07-25 12:11:25.553031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.553046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.553063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c8c0, cid 7, qid 0
00:24:48.446 [2024-07-25 12:11:25.553388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.553398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.553402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c8c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.553445] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:24:48.446 [2024-07-25 12:11:25.553457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8be40) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.553465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.446 [2024-07-25 12:11:25.553471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8bfc0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.553477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.446 [2024-07-25 12:11:25.553483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c140) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.553489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.446 [2024-07-25 12:11:25.553495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.553501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.446 [2024-07-25 12:11:25.553510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.553528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.553544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.553709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.553718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.553723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.553737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.553746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.553755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.553773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.553996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.554004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.554008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.554018] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:24:48.446 [2024-07-25 12:11:25.554024] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:24:48.446 [2024-07-25 12:11:25.554036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.554057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.554070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.554248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.554256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.554260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.554278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554283] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.554296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.554309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.554468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.554476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.554480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.554497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.554515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.554529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.554700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.554708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.554713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.554730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.554748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.554761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.554922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.554930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.554934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.554951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.554961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.554971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.554985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.555138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.555146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.555151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.555166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.555184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.555197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.555361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.555369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.555374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.555391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.555409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.555423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.555612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.555621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.555625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.555642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.555660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.555674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.555835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.555843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.555848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.555865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.555874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.555882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.555898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.556054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.556062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.556066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.556083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.556100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.556113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.556262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.556270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.556274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.556290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0)
00:24:48.446 [2024-07-25 12:11:25.556308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.446 [2024-07-25 12:11:25.556321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0
00:24:48.446 [2024-07-25 12:11:25.556507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:48.446 [2024-07-25 12:11:25.556515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:48.446 [2024-07-25 12:11:25.556519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0
00:24:48.446 [2024-07-25 12:11:25.556536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:48.446 [2024-07-25 12:11:25.556546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd
cid=3 on tqpair(0xf08ec0) 00:24:48.446 [2024-07-25 12:11:25.556554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.446 [2024-07-25 12:11:25.556567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0 00:24:48.446 [2024-07-25 12:11:25.560613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.446 [2024-07-25 12:11:25.560626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.446 [2024-07-25 12:11:25.560631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.446 [2024-07-25 12:11:25.560636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0 00:24:48.446 [2024-07-25 12:11:25.560650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.446 [2024-07-25 12:11:25.560655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.446 [2024-07-25 12:11:25.560659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf08ec0) 00:24:48.446 [2024-07-25 12:11:25.560668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.446 [2024-07-25 12:11:25.560684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8c2c0, cid 3, qid 0 00:24:48.446 [2024-07-25 12:11:25.560912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.446 [2024-07-25 12:11:25.560921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.446 [2024-07-25 12:11:25.560926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.446 [2024-07-25 12:11:25.560931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8c2c0) on tqpair=0xf08ec0 00:24:48.446 [2024-07-25 12:11:25.560941] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:48.446 0% 00:24:48.446 Data Units Read: 0 00:24:48.446 Data Units Written: 0 00:24:48.446 Host Read Commands: 0 00:24:48.446 Host Write Commands: 0 00:24:48.447 Controller Busy Time: 0 minutes 00:24:48.447 Power Cycles: 0 00:24:48.447 Power On Hours: 0 hours 00:24:48.447 Unsafe Shutdowns: 0 00:24:48.447 Unrecoverable Media Errors: 0 00:24:48.447 Lifetime Error Log Entries: 0 00:24:48.447 Warning Temperature Time: 0 minutes 00:24:48.447 Critical Temperature Time: 0 minutes 00:24:48.447 00:24:48.447 Number of Queues 00:24:48.447 ================ 00:24:48.447 Number of I/O Submission Queues: 127 00:24:48.447 Number of I/O Completion Queues: 127 00:24:48.447 00:24:48.447 Active Namespaces 00:24:48.447 ================= 00:24:48.447 Namespace ID:1 00:24:48.447 Error Recovery Timeout: Unlimited 00:24:48.447 Command Set Identifier: NVM (00h) 00:24:48.447 Deallocate: Supported 00:24:48.447 Deallocated/Unwritten Error: Not Supported 00:24:48.447 Deallocated Read Value: Unknown 00:24:48.447 Deallocate in Write Zeroes: Not Supported 00:24:48.447 Deallocated Guard Field: 0xFFFF 00:24:48.447 Flush: Supported 00:24:48.447 Reservation: Supported 00:24:48.447 Namespace Sharing Capabilities: Multiple Controllers 00:24:48.447 Size (in LBAs): 131072 (0GiB) 00:24:48.447 Capacity (in LBAs): 131072 (0GiB) 00:24:48.447 Utilization (in LBAs): 131072 (0GiB) 00:24:48.447 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:48.447 EUI64: ABCDEF0123456789 00:24:48.447 UUID: 7acb017a-0b77-404a-864f-ed16235d14b8 00:24:48.447 Thin Provisioning: Not Supported 00:24:48.447 Per-NS Atomic Units: Yes 00:24:48.447 Atomic Boundary Size (Normal): 0 00:24:48.447 Atomic Boundary Size (PFail): 0 00:24:48.447 Atomic Boundary Offset: 0 00:24:48.447 Maximum Single Source Range Length: 65535 00:24:48.447 Maximum Copy Length: 65535 00:24:48.447 Maximum Source Range Count: 1 00:24:48.447 NGUID/EUI64 Never Reused: No 00:24:48.447 Namespace 
Write Protected: No 00:24:48.447 Number of LBA Formats: 1 00:24:48.447 Current LBA Format: LBA Format #00 00:24:48.447 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:48.447 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:48.447 rmmod nvme_tcp 00:24:48.447 rmmod nvme_fabrics 00:24:48.447 rmmod nvme_keyring 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 32152 ']' 00:24:48.447 
12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 32152 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 32152 ']' 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 32152 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 32152 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 32152' 00:24:48.447 killing process with pid 32152 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 32152 00:24:48.447 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 32152 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:48.705 12:11:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:51.281 00:24:51.281 real 0m9.954s 00:24:51.281 user 0m8.502s 00:24:51.281 sys 0m4.829s 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.281 ************************************ 00:24:51.281 END TEST nvmf_identify 00:24:51.281 ************************************ 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.281 ************************************ 00:24:51.281 START TEST nvmf_perf 00:24:51.281 ************************************ 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:51.281 * Looking for test storage... 
00:24:51.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.281 12:11:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.281 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.282 12:11:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.282 12:11:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:56.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.556 12:11:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:56.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:24:56.556 Found net devices under 0000:af:00.0: cvl_0_0 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:56.556 Found net devices under 0000:af:00.1: cvl_0_1 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.556 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.814 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.814 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.814 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.814 12:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.814 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:24:56.814 00:24:56.814 --- 10.0.0.2 ping statistics --- 00:24:56.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.814 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:24:56.814 00:24:56.814 --- 10.0.0.1 ping statistics --- 00:24:56.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.814 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:24:56.814 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=36145 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 36145 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 36145 ']' 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.072 12:11:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.072 [2024-07-25 12:11:34.172713] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:57.072 [2024-07-25 12:11:34.172768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.072 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.072 [2024-07-25 12:11:34.258865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.072 [2024-07-25 12:11:34.350707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:57.072 [2024-07-25 12:11:34.350749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.072 [2024-07-25 12:11:34.350759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.072 [2024-07-25 12:11:34.350768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.072 [2024-07-25 12:11:34.350775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.072 [2024-07-25 12:11:34.350834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.072 [2024-07-25 12:11:34.350947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.072 [2024-07-25 12:11:34.350977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.072 [2024-07-25 12:11:34.350977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:58.007 12:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:01.296 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:01.296 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:01.296 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:25:01.296 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:01.555 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:01.555 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:25:01.555 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:01.555 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:01.555 12:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:01.813 [2024-07-25 12:11:39.055193] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.813 12:11:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.071 12:11:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:02.071 12:11:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.329 12:11:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:02.329 12:11:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:02.587 12:11:39 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.846 [2024-07-25 12:11:39.936331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.846 12:11:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:02.846 12:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:25:02.846 12:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:02.846 12:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:02.846 12:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:04.221 Initializing NVMe Controllers 00:25:04.221 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:25:04.221 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:25:04.221 Initialization complete. Launching workers. 
00:25:04.221 ======================================================== 00:25:04.221 Latency(us) 00:25:04.221 Device Information : IOPS MiB/s Average min max 00:25:04.221 PCIE (0000:86:00.0) NSID 1 from core 0: 68796.55 268.74 464.38 14.18 5293.56 00:25:04.221 ======================================================== 00:25:04.221 Total : 68796.55 268.74 464.38 14.18 5293.56 00:25:04.221 00:25:04.221 12:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:04.221 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.598 Initializing NVMe Controllers 00:25:05.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:05.598 Initialization complete. Launching workers. 
00:25:05.598 ======================================================== 00:25:05.598 Latency(us) 00:25:05.598 Device Information : IOPS MiB/s Average min max 00:25:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.00 0.18 21759.78 278.96 45583.98 00:25:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.00 0.25 16084.85 6007.43 47888.46 00:25:05.598 ======================================================== 00:25:05.598 Total : 111.00 0.43 18487.75 278.96 47888.46 00:25:05.598 00:25:05.598 12:11:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.598 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.975 Initializing NVMe Controllers 00:25:06.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:06.975 Initialization complete. Launching workers. 
00:25:06.975 ======================================================== 00:25:06.975 Latency(us) 00:25:06.975 Device Information : IOPS MiB/s Average min max 00:25:06.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4318.00 16.87 7453.44 923.17 12654.89 00:25:06.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3839.00 15.00 8381.59 6614.32 16063.66 00:25:06.975 ======================================================== 00:25:06.975 Total : 8157.00 31.86 7890.27 923.17 16063.66 00:25:06.975 00:25:06.975 12:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:06.975 12:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:06.975 12:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.233 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.772 Initializing NVMe Controllers 00:25:09.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.772 Controller IO queue size 128, less than required. 00:25:09.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.772 Controller IO queue size 128, less than required. 00:25:09.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:09.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:09.772 Initialization complete. Launching workers. 
00:25:09.772 ======================================================== 00:25:09.772 Latency(us) 00:25:09.772 Device Information : IOPS MiB/s Average min max 00:25:09.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 987.82 246.96 132822.51 75069.04 213951.46 00:25:09.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.23 143.56 231196.84 65412.93 362528.44 00:25:09.772 ======================================================== 00:25:09.772 Total : 1562.06 390.51 168986.31 65412.93 362528.44 00:25:09.772 00:25:09.773 12:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:09.773 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.773 No valid NVMe controllers or AIO or URING devices found 00:25:09.773 Initializing NVMe Controllers 00:25:09.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.773 Controller IO queue size 128, less than required. 00:25:09.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.773 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:09.773 Controller IO queue size 128, less than required. 00:25:09.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.773 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:09.773 WARNING: Some requested NVMe devices were skipped 00:25:09.773 12:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:09.773 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.306 Initializing NVMe Controllers 00:25:12.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.306 Controller IO queue size 128, less than required. 00:25:12.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:12.306 Controller IO queue size 128, less than required. 00:25:12.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:12.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:12.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:12.306 Initialization complete. Launching workers. 
00:25:12.306 00:25:12.306 ==================== 00:25:12.306 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:12.306 TCP transport: 00:25:12.306 polls: 18547 00:25:12.306 idle_polls: 7505 00:25:12.306 sock_completions: 11042 00:25:12.306 nvme_completions: 4157 00:25:12.306 submitted_requests: 6270 00:25:12.306 queued_requests: 1 00:25:12.306 00:25:12.306 ==================== 00:25:12.306 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:12.306 TCP transport: 00:25:12.306 polls: 20307 00:25:12.306 idle_polls: 9032 00:25:12.306 sock_completions: 11275 00:25:12.306 nvme_completions: 4309 00:25:12.306 submitted_requests: 6498 00:25:12.306 queued_requests: 1 00:25:12.306 ======================================================== 00:25:12.306 Latency(us) 00:25:12.306 Device Information : IOPS MiB/s Average min max 00:25:12.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1037.54 259.39 128106.46 71799.38 217383.67 00:25:12.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1075.49 268.87 120676.72 54780.71 175524.28 00:25:12.306 ======================================================== 00:25:12.306 Total : 2113.03 528.26 124324.88 54780.71 217383.67 00:25:12.306 00:25:12.306 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:12.307 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.565 12:11:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.565 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.565 rmmod nvme_tcp 00:25:12.565 rmmod nvme_fabrics 00:25:12.565 rmmod nvme_keyring 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 36145 ']' 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 36145 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 36145 ']' 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 36145 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 36145 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 36145' 00:25:12.824 killing process with pid 36145 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 36145 00:25:12.824 12:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 36145 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.729 12:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.633 00:25:16.633 real 0m25.498s 00:25:16.633 user 1m9.290s 00:25:16.633 sys 0m7.558s 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:16.633 ************************************ 00:25:16.633 END TEST nvmf_perf 00:25:16.633 ************************************ 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.633 
************************************ 00:25:16.633 START TEST nvmf_fio_host 00:25:16.633 ************************************ 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:16.633 * Looking for test storage... 00:25:16.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.633 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.634 12:11:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.198 12:11:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:23.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:23.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:23.198 Found net devices under 0000:af:00.0: cvl_0_0 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:23.198 Found net devices under 0000:af:00.1: cvl_0_1 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:23.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:23.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:23.198 00:25:23.198 --- 10.0.0.2 ping statistics --- 00:25:23.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.198 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:25:23.198 00:25:23.198 --- 10.0.0.1 ping statistics --- 00:25:23.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.198 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:23.198 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.199 
12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=42820 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 42820 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 42820 ']' 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.199 12:11:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.199 [2024-07-25 12:11:59.799205] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:25:23.199 [2024-07-25 12:11:59.799246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.199 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.199 [2024-07-25 12:11:59.872959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.199 [2024-07-25 12:11:59.962924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.199 [2024-07-25 12:11:59.962969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.199 [2024-07-25 12:11:59.962979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.199 [2024-07-25 12:11:59.962988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.199 [2024-07-25 12:11:59.962996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.199 [2024-07-25 12:11:59.963049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.199 [2024-07-25 12:11:59.963161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.199 [2024-07-25 12:11:59.963191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.199 [2024-07-25 12:11:59.963192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.457 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.457 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:25:23.457 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:23.715 [2024-07-25 12:12:00.898991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.715 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:23.715 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:23.715 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.715 12:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:23.973 Malloc1 00:25:23.973 12:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.231 12:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:24.490 12:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.748 [2024-07-25 12:12:01.922889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.748 12:12:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:25.007 12:12:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:25.265 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:25.265 fio-3.35 
00:25:25.265 Starting 1 thread 00:25:25.265 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.827 00:25:27.827 test: (groupid=0, jobs=1): err= 0: pid=43622: Thu Jul 25 12:12:04 2024 00:25:27.827 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(29.4MiB/2016msec) 00:25:27.827 slat (usec): min=2, max=243, avg= 2.65, stdev= 3.96 00:25:27.827 clat (usec): min=5165, max=30958, avg=18524.22, stdev=1824.27 00:25:27.827 lat (usec): min=5198, max=30960, avg=18526.87, stdev=1823.81 00:25:27.827 clat percentiles (usec): 00:25:27.827 | 1.00th=[14353], 5.00th=[16057], 10.00th=[16450], 20.00th=[17171], 00:25:27.827 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:25:27.827 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20579], 95.00th=[21103], 00:25:27.827 | 99.00th=[22414], 99.50th=[22938], 99.90th=[29230], 99.95th=[30802], 00:25:27.827 | 99.99th=[31065] 00:25:27.827 bw ( KiB/s): min=14328, max=15520, per=99.98%, avg=14956.00, stdev=491.96, samples=4 00:25:27.827 iops : min= 3582, max= 3880, avg=3739.00, stdev=122.99, samples=4 00:25:27.827 write: IOPS=3764, BW=14.7MiB/s (15.4MB/s)(29.6MiB/2016msec); 0 zone resets 00:25:27.827 slat (usec): min=2, max=245, avg= 2.73, stdev= 3.02 00:25:27.827 clat (usec): min=2481, max=30485, avg=15384.94, stdev=1570.18 00:25:27.827 lat (usec): min=2497, max=30487, avg=15387.66, stdev=1569.75 00:25:27.827 clat percentiles (usec): 00:25:27.827 | 1.00th=[12125], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:25:27.827 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:25:27.827 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17171], 00:25:27.827 | 99.00th=[18482], 99.50th=[20055], 99.90th=[27132], 99.95th=[29230], 00:25:27.827 | 99.99th=[30540] 00:25:27.827 bw ( KiB/s): min=14704, max=15312, per=99.92%, avg=15048.00, stdev=253.24, samples=4 00:25:27.827 iops : min= 3676, max= 3828, avg=3762.00, stdev=63.31, samples=4 00:25:27.827 lat (msec) : 4=0.07%, 10=0.27%, 20=89.17%, 50=10.49% 
00:25:27.827 cpu : usr=69.68%, sys=28.29%, ctx=74, majf=0, minf=5 00:25:27.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:27.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:27.827 issued rwts: total=7539,7590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:27.827 00:25:27.827 Run status group 0 (all jobs): 00:25:27.827 READ: bw=14.6MiB/s (15.3MB/s), 14.6MiB/s-14.6MiB/s (15.3MB/s-15.3MB/s), io=29.4MiB (30.9MB), run=2016-2016msec 00:25:27.827 WRITE: bw=14.7MiB/s (15.4MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=29.6MiB (31.1MB), run=2016-2016msec 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:27.827 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:27.828 12:12:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:25:28.085 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:28.085 fio-3.35 00:25:28.085 Starting 1 thread 00:25:28.085 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.619 00:25:30.619 test: (groupid=0, jobs=1): err= 0: pid=44153: Thu Jul 25 12:12:07 2024 00:25:30.619 read: IOPS=4779, BW=74.7MiB/s (78.3MB/s)(150MiB/2009msec) 00:25:30.619 slat (usec): min=3, max=125, avg= 4.27, stdev= 1.47 00:25:30.619 clat (usec): min=4662, max=34688, avg=15373.27, stdev=5443.78 00:25:30.619 lat (usec): min=4666, max=34692, avg=15377.54, stdev=5443.79 00:25:30.619 clat percentiles (usec): 00:25:30.619 | 1.00th=[ 5735], 5.00th=[ 7111], 10.00th=[ 8160], 20.00th=[ 9896], 00:25:30.619 | 30.00th=[11469], 40.00th=[13566], 50.00th=[15926], 60.00th=[17695], 00:25:30.619 | 70.00th=[19006], 80.00th=[19792], 90.00th=[21627], 95.00th=[23987], 00:25:30.619 | 99.00th=[29230], 99.50th=[29754], 99.90th=[32637], 99.95th=[34341], 00:25:30.619 | 99.99th=[34866] 00:25:30.619 bw ( KiB/s): min=26432, max=66240, per=51.88%, avg=39672.00, stdev=18005.14, samples=4 00:25:30.619 iops : min= 1652, max= 4140, avg=2479.50, stdev=1125.32, samples=4 00:25:30.619 write: IOPS=2763, BW=43.2MiB/s (45.3MB/s)(81.7MiB/1892msec); 0 zone resets 00:25:30.619 slat (usec): min=45, max=255, avg=46.97, stdev= 4.85 00:25:30.619 clat (usec): min=6728, max=41020, avg=20054.77, stdev=7567.12 00:25:30.619 lat (usec): min=6774, max=41067, avg=20101.73, stdev=7566.89 00:25:30.619 clat percentiles (usec): 00:25:30.619 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11731], 00:25:30.619 | 30.00th=[13304], 40.00th=[15795], 50.00th=[20841], 60.00th=[23987], 00:25:30.619 | 70.00th=[25822], 80.00th=[27395], 90.00th=[29754], 95.00th=[31327], 00:25:30.619 | 99.00th=[33817], 99.50th=[36439], 99.90th=[38536], 99.95th=[39060], 00:25:30.619 | 99.99th=[41157] 00:25:30.619 bw ( KiB/s): min=27136, max=68672, per=93.10%, 
avg=41160.00, stdev=18700.86, samples=4
00:25:30.619 iops : min= 1696, max= 4292, avg=2572.50, stdev=1168.80, samples=4
00:25:30.619 lat (msec) : 10=15.76%, 20=53.34%, 50=30.90%
00:25:30.619 cpu : usr=79.58%, sys=18.53%, ctx=109, majf=0, minf=2
00:25:30.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7%
00:25:30.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:30.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:30.619 issued rwts: total=9602,5228,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:30.619 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:30.619
00:25:30.619 Run status group 0 (all jobs):
00:25:30.619 READ: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=150MiB (157MB), run=2009-2009msec
00:25:30.619 WRITE: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=81.7MiB (85.7MB), run=1892-1892msec
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:30.619 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:30.878 rmmod nvme_tcp
00:25:30.878 rmmod nvme_fabrics
00:25:30.878 rmmod nvme_keyring
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:25:30.878 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 42820 ']'
00:25:30.879 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 42820
00:25:30.879 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 42820 ']'
00:25:30.879 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 42820
00:25:30.879 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname
00:25:30.879 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:30.879 12:12:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 42820
00:25:30.879 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:30.879 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:30.879 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 42820'
killing process with pid 42820
00:25:30.879 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 42820
00:25:30.879 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 42820
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:31.138 12:12:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:33.043 12:12:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:33.043
00:25:33.043 real 0m16.674s
00:25:33.043 user 1m1.450s
00:25:33.043 sys 0m6.717s
00:25:33.043 12:12:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:33.043 12:12:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:33.043 ************************************
00:25:33.043 END TEST nvmf_fio_host
00:25:33.043 ************************************
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:33.302 ************************************
00:25:33.302 START TEST nvmf_failover
00:25:33.302 ************************************
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:25:33.302 * Looking for test storage...
00:25:33.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:33.302 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable
00:25:33.303 12:12:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=()
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:39.876 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:39.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:39.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:25:39.877
00:25:39.877 --- 10.0.0.2 ping statistics ---
00:25:39.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:39.877 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:39.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:39.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms
00:25:39.877
00:25:39.877 --- 10.0.0.1 ping statistics ---
00:25:39.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:39.877 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=48623
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 48623
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 48623 ']'
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:39.877 12:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:39.877 [2024-07-25 12:12:16.490969] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:25:39.877 [2024-07-25 12:12:16.491036] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:39.877 EAL: No free 2048 kB hugepages reported on node 1
00:25:39.877 [2024-07-25 12:12:16.579178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:39.877 [2024-07-25 12:12:16.684115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:39.877 [2024-07-25 12:12:16.684160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:39.877 [2024-07-25 12:12:16.684173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:39.877 [2024-07-25 12:12:16.684185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:39.877 [2024-07-25 12:12:16.684194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:39.877 [2024-07-25 12:12:16.684323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:39.877 [2024-07-25 12:12:16.684360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:39.877 [2024-07-25 12:12:16.684361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:40.136 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:40.394 [2024-07-25 12:12:17.626196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:40.394 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:40.652 Malloc0
00:25:40.911 12:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:41.170 12:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:41.429 12:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:41.429 [2024-07-25 12:12:18.712876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:41.687 12:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:41.687 [2024-07-25 12:12:18.977989] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:41.945 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:41.945 [2024-07-25 12:12:19.234981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=49087
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 49087 /var/tmp/bdevperf.sock
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 49087 ']'
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:42.203 12:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:43.138 12:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:43.139 12:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:43.139 12:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:43.396 NVMe0n1
00:25:43.396 12:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:43.964
00:25:43.964 12:12:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=49467
00:25:43.964 12:12:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:43.964 12:12:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:44.938 12:12:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:45.197 12:12:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:48.482 12:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:48.482
00:25:48.482 12:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:48.741 12:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:52.025 12:12:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:52.025 [2024-07-25 12:12:29.183761] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:52.025 12:12:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:52.959 12:12:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:53.217 12:12:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 49467
00:25:59.783 0
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 49087
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 49087 ']'
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 49087
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 49087
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 49087'
killing process with pid 49087
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 49087
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 49087
00:25:59.783 12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:59.783 [2024-07-25 12:12:19.317786] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:25:59.783 [2024-07-25 12:12:19.317854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49087 ]
00:25:59.783 EAL: No free 2048 kB hugepages reported on node 1
00:25:59.783 [2024-07-25 12:12:19.399391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:59.783 [2024-07-25 12:12:19.487982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:59.783 Running I/O for 15 seconds...
00:25:59.783 [2024-07-25 12:12:22.297038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.783 [2024-07-25 12:12:22.297087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.783 [2024-07-25 12:12:22.297106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.783 [2024-07-25 12:12:22.297118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.783 [2024-07-25 12:12:22.297131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.783 [2024-07-25 12:12:22.297141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.783 [2024-07-25 12:12:22.297155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.783 [2024-07-25 12:12:22.297165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.783 [2024-07-25 12:12:22.297177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.783 [2024-07-25 12:12:22.297188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.783 [2024-07-25 12:12:22.297201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.783 [2024-07-25 12:12:22.297211] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.783 [2024-07-25 12:12:22.297223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.783 [2024-07-25 12:12:22.297232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.783 [2024-07-25 12:12:22.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.783 [2024-07-25 12:12:22.297257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.783 [2024-07-25 12:12:22.297269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.783 [2024-07-25 12:12:22.297279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.783 [2024-07-25 12:12:22.297291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.783 [2024-07-25 12:12:22.297302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.784 [2024-07-25 12:12:22.297468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 
[2024-07-25 12:12:22.297481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.784 [2024-07-25 12:12:22.297491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 
[2024-07-25 12:12:22.297857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.297989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.784 [2024-07-25 12:12:22.298170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.784 [2024-07-25 12:12:22.298182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 
12:12:22.298225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298340] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.785 [2024-07-25 12:12:22.298539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34392 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298621] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34400 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34408 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34416 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34424 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34432 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34440 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34448 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298865] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34456 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34464 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34472 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.298952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.298969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.298977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34480 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 
[2024-07-25 12:12:22.298986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.298995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.299003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.785 [2024-07-25 12:12:22.299011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34488 len:8 PRP1 0x0 PRP2 0x0 00:25:59.785 [2024-07-25 12:12:22.299023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.785 [2024-07-25 12:12:22.299032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.785 [2024-07-25 12:12:22.299040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34504 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34512 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34520 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34528 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34536 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34544 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34552 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34560 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34568 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34576 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34584 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34592 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34600 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34608 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34616 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34624 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34632 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34640 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 
[2024-07-25 12:12:22.299724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34648 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34656 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34664 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.786 [2024-07-25 12:12:22.299826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.786 [2024-07-25 12:12:22.299835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:34672 len:8 PRP1 0x0 PRP2 0x0 00:25:59.786 [2024-07-25 12:12:22.299844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.786 [2024-07-25 12:12:22.299857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.299864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.299871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34680 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.299882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.299892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.299899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.299907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34688 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.299916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.299926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.299933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.299941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34696 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.299950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.299960] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.299967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.299975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34704 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.299985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.299995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34712 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34720 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 
12:12:22.300079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34728 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34736 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34744 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34752 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34760 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34768 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.300274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.300281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.300289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34776 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.300298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310021] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34784 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34792 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34800 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34808 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 
[2024-07-25 12:12:22.310193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34816 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34824 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33832 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33840 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33848 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.787 [2024-07-25 12:12:22.310466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33856 len:8 PRP1 0x0 PRP2 0x0 00:25:59.787 [2024-07-25 12:12:22.310479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.787 [2024-07-25 12:12:22.310494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.787 [2024-07-25 12:12:22.310505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.788 [2024-07-25 12:12:22.310516] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33864 len:8 PRP1 0x0 PRP2 0x0
00:25:59.788 [2024-07-25 12:12:22.310529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:59.788 [2024-07-25 12:12:22.310552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:59.788 [2024-07-25 12:12:22.310564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33872 len:8 PRP1 0x0 PRP2 0x0
00:25:59.788 [2024-07-25 12:12:22.310577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:59.788 [2024-07-25 12:12:22.310601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:59.788 [2024-07-25 12:12:22.310621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33880 len:8 PRP1 0x0 PRP2 0x0
00:25:59.788 [2024-07-25 12:12:22.310633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310690] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe2b160 was disconnected and freed. reset controller.
00:25:59.788 [2024-07-25 12:12:22.310706] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:59.788 [2024-07-25 12:12:22.310740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.788 [2024-07-25 12:12:22.310756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.788 [2024-07-25 12:12:22.310784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.788 [2024-07-25 12:12:22.310813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.788 [2024-07-25 12:12:22.310841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:22.310854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:59.788 [2024-07-25 12:12:22.310899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0ba30 (9): Bad file descriptor
00:25:59.788 [2024-07-25 12:12:22.316728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:59.788 [2024-07-25 12:12:22.397794] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:59.788 [2024-07-25 12:12:25.912827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.788 [2024-07-25 12:12:25.912877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:25.912896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.788 [2024-07-25 12:12:25.912913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:25.912926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.788 [2024-07-25 12:12:25.912935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:25.912947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.788 [2024-07-25 12:12:25.912958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.788 [2024-07-25 12:12:25.912970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93472 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.912980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.912993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.788 [2024-07-25 12:12:25.913158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.788 [2024-07-25 12:12:25.913340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.788 [2024-07-25 12:12:25.913352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 
[2024-07-25 12:12:25.913361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.913511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 
[2024-07-25 12:12:25.913732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.913986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.913995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.914018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.914041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.914062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.914083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.789 [2024-07-25 12:12:25.914106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.914127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.914148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.914170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.789 [2024-07-25 12:12:25.914191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.789 [2024-07-25 12:12:25.914203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 
[2024-07-25 12:12:25.914357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 
[2024-07-25 12:12:25.914735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.790 [2024-07-25 12:12:25.914960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.790 [2024-07-25 12:12:25.914972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.790 [2024-07-25 12:12:25.914981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.790 [2024-07-25 12:12:25.914994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.790 [2024-07-25 12:12:25.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeat for lba:93200 through lba:93432 (len:8, step 8) ...]
00:25:59.791 [2024-07-25 12:12:25.915679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:59.791 [2024-07-25 12:12:25.915689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:59.791 [2024-07-25 12:12:25.915698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93760 len:8 PRP1 0x0 PRP2 0x0
00:25:59.791 [2024-07-25 12:12:25.915708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.791 [2024-07-25 12:12:25.915760] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe2d110 was disconnected and freed. reset controller.
00:25:59.791 [2024-07-25 12:12:25.915773] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:59.791 [2024-07-25 12:12:25.915799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.791 [2024-07-25 12:12:25.915810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:2, cid:1, and cid:0 ...]
00:25:59.791 [2024-07-25 12:12:25.915879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:59.791 [2024-07-25 12:12:25.920133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:59.791 [2024-07-25 12:12:25.920166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0ba30 (9): Bad file descriptor
00:25:59.791 [2024-07-25 12:12:25.968715] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:59.791 [2024-07-25 12:12:30.457861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.791 [2024-07-25 12:12:30.457910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION (00/08)" pairs repeat for lba:62816 through lba:63120 (len:8, step 8), interleaved with READ / "ABORTED - SQ DELETION (00/08)" pairs for lba:62112 through lba:62408 (len:8, step 8) ...]
00:25:59.794 [2024-07-25 12:12:30.459647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1
lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 
[2024-07-25 12:12:30.459782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.459975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.459988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 
[2024-07-25 12:12:30.460152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.794 [2024-07-25 12:12:30.460328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.794 [2024-07-25 12:12:30.460337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 
[2024-07-25 12:12:30.460521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.795 [2024-07-25 12:12:30.460711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cdd0 is same with the state(5) to be set 00:25:59.795 [2024-07-25 12:12:30.460733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.795 [2024-07-25 12:12:30.460741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.795 [2024-07-25 12:12:30.460749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63128 len:8 PRP1 0x0 PRP2 0x0 00:25:59.795 [2024-07-25 12:12:30.460759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460809] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe2cdd0 was disconnected and freed. reset controller. 00:25:59.795 [2024-07-25 12:12:30.460822] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:59.795 [2024-07-25 12:12:30.460849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.795 [2024-07-25 12:12:30.460861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.795 [2024-07-25 12:12:30.460887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.795 [2024-07-25 12:12:30.460907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.795 [2024-07-25 12:12:30.460928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.795 [2024-07-25 12:12:30.460937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
[2024-07-25 12:12:30.465200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
[2024-07-25 12:12:30.465236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0ba30 (9): Bad file descriptor 
[2024-07-25 12:12:30.635762] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
Latency(us) 
Device Information : runtime(s) IOPS    MiB/s Fail/s TO/s Average  min    max 
Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
Verification LBA range: start 0x0 length 0x4000 
NVMe0n1            : 15.02      4943.34 19.31 615.39 0.00 22989.83 949.53 32887.16 
=================================================================================================================== 
Total              :            4943.34 19.31 615.39 0.00 22989.83 949.53 32887.16 
Received shutdown signal, test time was about 15.000000 seconds 
Latency(us) 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min  max 
=================================================================================================================== 
Total              :            0.00 0.00  0.00   0.00 0.00    0.00 0.00 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=52102 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 52102 /var/tmp/bdevperf.sock 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 52102 ']' 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 
12:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
[2024-07-25 12:12:37.065505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 
12:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
[2024-07-25 12:12:37.250250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
12:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
NVMe0n1 
12:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
12:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
12:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 
12:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
12:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
12:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 
12:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
12:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 
12:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=53148 
12:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
12:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 53148 
0 
12:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
[2024-07-25 12:12:36.565505] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
[2024-07-25 12:12:36.565569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52102 ] 
EAL: No free 2048 kB hugepages reported on node 1 
[2024-07-25 12:12:36.647107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
[2024-07-25 12:12:36.734316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
[2024-07-25 12:12:39.126951] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
[2024-07-25 12:12:39.127010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
[2024-07-25 12:12:39.127026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... 3 further ASYNC EVENT REQUEST command/completion pairs elided (cid:1 through cid:3), each aborted the same way ...] 
[2024-07-25 12:12:39.127101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
[2024-07-25 12:12:39.127134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
[2024-07-25 12:12:39.127153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b5a30 (9): Bad file descriptor 
[2024-07-25 12:12:39.180243] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
Running I/O for 1 seconds... 
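The trace above asserts on failover success by counting 'Resetting controller successful' notices in the captured bdevperf output (`grep -c` at host/failover.sh@65, compared against the expected reset count at @67). A minimal standalone sketch of that check; the log file name and sample lines below are illustrative, not taken from the test:

```shell
# Count 'Resetting controller successful' notices in a captured log and
# compare against the expected number of failover-driven resets.
log=failover-sample.log

# Illustrative sample log content (two successful resets).
printf '%s\n' \
  'bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.' \
  'nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: resetting controller' \
  'bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.' > "$log"

count=$(grep -c 'Resetting controller successful' "$log")
if [ "$count" -ne 2 ]; then
  echo "expected 2 successful resets, saw $count" >&2
  exit 1
fi
echo "OK: $count successful resets"   # prints "OK: 2 successful resets"
```

The real test expects exactly 3 resets because it triggers one failover per extra listener; the count is the test's only pass/fail signal for the reset path.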
00:26:06.640 00:26:06.640 Latency(us) 00:26:06.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.641 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:06.641 Verification LBA range: start 0x0 length 0x4000 00:26:06.641 NVMe0n1 : 1.01 3785.75 14.79 0.00 0.00 33633.41 2129.92 37891.72 00:26:06.641 =================================================================================================================== 00:26:06.641 Total : 3785.75 14.79 0.00 0.00 33633.41 2129.92 37891.72 00:26:06.641 12:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.641 12:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:06.641 12:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.899 12:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.899 12:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:07.157 12:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:07.415 12:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.701 
12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 52102 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 52102 ']' 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 52102 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 52102 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 52102' 00:26:10.701 killing process with pid 52102 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 52102 00:26:10.701 12:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 52102 00:26:10.960 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:10.960 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 
-- # nvmftestfini 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.219 rmmod nvme_tcp 00:26:11.219 rmmod nvme_fabrics 00:26:11.219 rmmod nvme_keyring 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 48623 ']' 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 48623 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 48623 ']' 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 48623 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 48623 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:11.219 12:12:48 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 48623' 00:26:11.219 killing process with pid 48623 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 48623 00:26:11.219 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 48623 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.787 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.788 12:12:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:13.691 00:26:13.691 real 0m40.452s 00:26:13.691 user 2m11.134s 00:26:13.691 sys 0m7.858s 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:13.691 ************************************ 00:26:13.691 END TEST nvmf_failover 00:26:13.691 ************************************ 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.691 
12:12:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.691 ************************************ 00:26:13.691 START TEST nvmf_host_discovery 00:26:13.691 ************************************ 00:26:13.691 12:12:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.951 * Looking for test storage... 00:26:13.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.951 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.952 12:12:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.224 12:12:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.224 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:19.224 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.225 12:12:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:19.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.225 12:12:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:19.225 Found net devices under 0000:af:00.0: cvl_0_0 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:19.225 Found net devices under 0000:af:00.1: cvl_0_1 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.225 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.484 12:12:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:26:19.484 00:26:19.484 --- 10.0.0.2 ping statistics --- 00:26:19.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.484 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:19.484 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:19.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:26:19.484 00:26:19.484 --- 10.0.0.1 ping statistics --- 00:26:19.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.484 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=57718 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 57718 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 57718 ']' 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.743 12:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.743 [2024-07-25 12:12:56.881624] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:26:19.743 [2024-07-25 12:12:56.881682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.743 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.743 [2024-07-25 12:12:56.968091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.001 [2024-07-25 12:12:57.071520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.002 [2024-07-25 12:12:57.071565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:20.002 [2024-07-25 12:12:57.071579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.002 [2024-07-25 12:12:57.071589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.002 [2024-07-25 12:12:57.071600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.002 [2024-07-25 12:12:57.071632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.938 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.938 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:20.938 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.938 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.938 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.939 [2024-07-25 12:12:58.116211] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.939 [2024-07-25 12:12:58.128386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.939 null0 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.939 null1 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=57957 00:26:20.939 
12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 57957 /tmp/host.sock 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 57957 ']' 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:20.939 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.939 12:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 [2024-07-25 12:12:58.243181] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:26:21.215 [2024-07-25 12:12:58.243297] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:26:21.215 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.215 [2024-07-25 12:12:58.362480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.215 [2024-07-25 12:12:58.450180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.153 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.412 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 [2024-07-25 12:12:59.516345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.413 12:12:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:22.413 12:12:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.413 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.672 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:26:22.672 12:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:26:22.931 [2024-07-25 12:13:00.229807] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:22.931 [2024-07-25 12:13:00.229837] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:22.931 [2024-07-25 12:13:00.229855] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.189 [2024-07-25 12:13:00.317161] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:23.189 [2024-07-25 12:13:00.420981] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.189 [2024-07-25 12:13:00.421005] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.449 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.708 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.709 12:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:23.967 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:23.968 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.968 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
get_notification_count 00:26:23.968 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:23.968 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:23.968 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.968 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.227 [2024-07-25 12:13:01.313918] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:24.227 [2024-07-25 12:13:01.314427] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:24.227 [2024-07-25 12:13:01.314455] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.227 
12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.227 12:13:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT 
$NVMF_SECOND_PORT" ]]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.227 [2024-07-25 12:13:01.442288] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:24.227 12:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:24.486 [2024-07-25 12:13:01.544113] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:24.486 [2024-07-25 12:13:01.544135] 
bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.486 [2024-07-25 12:13:01.544142] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.421 [2024-07-25 12:13:02.598456] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:25.421 [2024-07-25 12:13:02.598484] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.421 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:25.421 [2024-07-25 12:13:02.606504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.421 [2024-07-25 12:13:02.606528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.421 [2024-07-25 12:13:02.606541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.421 [2024-07-25 12:13:02.606551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.421 [2024-07-25 12:13:02.606561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.421 [2024-07-25 12:13:02.606570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.422 [2024-07-25 12:13:02.606580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.422 [2024-07-25 12:13:02.606590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.422 [2024-07-25 12:13:02.606600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 12:13:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.422 [2024-07-25 12:13:02.616514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.422 [2024-07-25 12:13:02.626557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.422 [2024-07-25 12:13:02.626943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.422 [2024-07-25 12:13:02.626964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190490 with addr=10.0.0.2, port=4420 00:26:25.422 [2024-07-25 12:13:02.626975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 [2024-07-25 12:13:02.626991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 [2024-07-25 12:13:02.627014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:25.422 [2024-07-25 12:13:02.627024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:25.422 
[2024-07-25 12:13:02.627035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:25.422 [2024-07-25 12:13:02.627050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.422 [2024-07-25 12:13:02.636623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.422 [2024-07-25 12:13:02.636931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.422 [2024-07-25 12:13:02.636954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190490 with addr=10.0.0.2, port=4420 00:26:25.422 [2024-07-25 12:13:02.636965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 [2024-07-25 12:13:02.636980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 [2024-07-25 12:13:02.636995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:25.422 [2024-07-25 12:13:02.637003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:25.422 [2024-07-25 12:13:02.637013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:25.422 [2024-07-25 12:13:02.637027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
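The `waitforcondition` calls threaded through this trace (`local max=10`, `(( max-- ))`, `eval`, `return 0`) follow a bounded-retry polling pattern. A minimal sketch of that helper, reconstructed from the xtrace output above (the exact body in autotest_common.sh may differ; the sleep interval is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the waitforcondition helper as suggested by the xtrace lines:
# poll an arbitrary shell condition up to $max times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # The condition string is eval'd verbatim, matching the
        # "eval '[[' ... ']]'" records in the log.
        if eval "$cond"; then
            return 0
        fi
        sleep 1  # interval assumed; not visible in the trace
    done
    return 1
}
```

This is why the log alternates between condition evaluations and `sleep 1` records until the comparison (e.g. `[[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]`) finally succeeds and the helper returns 0.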
00:26:25.422 [2024-07-25 12:13:02.646688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.422 [2024-07-25 12:13:02.646983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.422 [2024-07-25 12:13:02.647002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190490 with addr=10.0.0.2, port=4420 00:26:25.422 [2024-07-25 12:13:02.647013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 [2024-07-25 12:13:02.647029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 [2024-07-25 12:13:02.647042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:25.422 [2024-07-25 12:13:02.647051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:25.422 [2024-07-25 12:13:02.647061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:25.422 [2024-07-25 12:13:02.647075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.422 [2024-07-25 12:13:02.656754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.422 [2024-07-25 12:13:02.657035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.422 [2024-07-25 12:13:02.657052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190490 with addr=10.0.0.2, port=4420 00:26:25.422 [2024-07-25 12:13:02.657063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 [2024-07-25 12:13:02.657079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 [2024-07-25 12:13:02.657093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:25.422 [2024-07-25 12:13:02.657104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:25.422 [2024-07-25 12:13:02.657115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:25.422 [2024-07-25 12:13:02.657128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
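The notification bookkeeping seen in the trace (`notification_count=1` then `notify_id=2`, later `notification_count=2` then `notify_id=4`) implies the helper counts new notifications past the last seen id and advances a cursor. A hedged sketch consistent with those records (the real host/discovery.sh body is not shown in the log; `rpc_cmd` is stubbed in the test below):

```shell
#!/usr/bin/env bash
# Sketch of get_notification_count as implied by the trace:
# fetch notifications newer than $notify_id, count them with jq,
# and advance the cursor by that count (so 1 -> 2 -> 2 -> 4 in the log).
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
```

`is_notification_count_eq N` then simply wraps this in `waitforcondition 'get_notification_count && ((notification_count == expected_count))'`, exactly the condition string visible in the eval records.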
00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.422 [2024-07-25 12:13:02.666813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.422 [2024-07-25 12:13:02.667116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.422 [2024-07-25 12:13:02.667133] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190490 with addr=10.0.0.2, port=4420 00:26:25.422 [2024-07-25 12:13:02.667146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 [2024-07-25 12:13:02.667162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 [2024-07-25 12:13:02.667185] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:25.422 [2024-07-25 12:13:02.667196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:25.422 [2024-07-25 12:13:02.667206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:25.422 [2024-07-25 12:13:02.667222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.422 [2024-07-25 12:13:02.676876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.422 [2024-07-25 12:13:02.677164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.422 [2024-07-25 12:13:02.677182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190490 with addr=10.0.0.2, port=4420 00:26:25.422 [2024-07-25 12:13:02.677193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190490 is same with the state(5) to be set 00:26:25.422 [2024-07-25 12:13:02.677208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190490 (9): Bad file descriptor 00:26:25.422 [2024-07-25 12:13:02.677231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:25.422 [2024-07-25 12:13:02.677241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:25.422 [2024-07-25 12:13:02.677252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:25.422 [2024-07-25 12:13:02.677266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
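The `get_bdev_list` checks in this trace pipe `bdev_get_bdevs` through `jq -r '.[].name'`, `sort`, and `xargs` to produce a single normalized line like `nvme0n1 nvme0n2`. A sketch of that pipeline as it appears in the xtrace (the helper name and socket path are taken from the log; `rpc_cmd` is stubbed for the standalone test):

```shell
#!/usr/bin/env bash
# Sketch of the get_bdev_list pipeline from the trace: extract every
# bdev name from the JSON-RPC reply, sort, and flatten to one line so
# it can be string-compared against "nvme0n1 nvme0n2".
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
```

The `get_subsystem_names` (`jq -r '.[].name'` over `bdev_nvme_get_controllers`) and `get_subsystem_paths` (`jq -r '.[].ctrlrs[].trid.trsvcid'` with `sort -n`) checks in the surrounding records follow the same extract-sort-flatten shape.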
00:26:25.422 [2024-07-25 12:13:02.686778] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:25.422 [2024-07-25 12:13:02.686800] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.422 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.423 
12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:25.423 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:25.423 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.718 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:25.719 12:13:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.719 
12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:25.719 12:13:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.719 12:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.097 [2024-07-25 12:13:04.059445] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:27.097 [2024-07-25 12:13:04.059465] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:27.097 [2024-07-25 12:13:04.059482] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.097 [2024-07-25 12:13:04.147844] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:27.097 [2024-07-25 12:13:04.213778] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.097 [2024-07-25 12:13:04.213812] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:26:27.097 request: 00:26:27.097 { 00:26:27.097 "name": "nvme", 00:26:27.097 "trtype": "tcp", 00:26:27.097 "traddr": "10.0.0.2", 00:26:27.097 "adrfam": "ipv4", 00:26:27.097 "trsvcid": "8009", 00:26:27.097 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.097 "wait_for_attach": true, 00:26:27.097 "method": "bdev_nvme_start_discovery", 00:26:27.097 "req_id": 1 00:26:27.097 } 00:26:27.097 Got JSON-RPC error response 00:26:27.097 response: 00:26:27.097 { 00:26:27.097 "code": -17, 00:26:27.097 "message": "File exists" 00:26:27.097 } 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.097 12:13:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.097 request: 00:26:27.097 { 00:26:27.097 "name": "nvme_second", 00:26:27.097 "trtype": "tcp", 00:26:27.097 "traddr": "10.0.0.2", 00:26:27.097 "adrfam": "ipv4", 00:26:27.097 "trsvcid": "8009", 00:26:27.097 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.097 "wait_for_attach": true, 00:26:27.097 "method": "bdev_nvme_start_discovery", 00:26:27.097 "req_id": 1 00:26:27.097 } 00:26:27.097 Got JSON-RPC error response 00:26:27.097 response: 00:26:27.097 { 00:26:27.097 "code": -17, 00:26:27.097 "message": "File exists" 00:26:27.097 } 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.097 
12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:27.097 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.356 12:13:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.356 12:13:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.292 [2024-07-25 12:13:05.478490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.292 [2024-07-25 12:13:05.478523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118cfc0 with addr=10.0.0.2, port=8010 00:26:28.292 [2024-07-25 12:13:05.478541] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:28.292 [2024-07-25 12:13:05.478550] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:28.292 [2024-07-25 12:13:05.478560] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:29.229 [2024-07-25 12:13:06.480931] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.229 [2024-07-25 12:13:06.480963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118cfc0 with addr=10.0.0.2, port=8010 00:26:29.229 [2024-07-25 12:13:06.480978] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:29.229 [2024-07-25 12:13:06.480987] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:29.229 [2024-07-25 12:13:06.480996] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:30.606 [2024-07-25 12:13:07.483042] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:30.606 request: 00:26:30.606 { 00:26:30.606 "name": "nvme_second", 00:26:30.606 "trtype": "tcp", 00:26:30.606 "traddr": "10.0.0.2", 00:26:30.606 "adrfam": "ipv4", 00:26:30.606 "trsvcid": "8010", 00:26:30.606 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:30.606 "wait_for_attach": false, 00:26:30.606 "attach_timeout_ms": 3000, 00:26:30.606 "method": "bdev_nvme_start_discovery", 00:26:30.606 "req_id": 1 00:26:30.606 } 00:26:30.606 Got JSON-RPC error response 00:26:30.606 response: 00:26:30.606 { 00:26:30.606 "code": -110, 00:26:30.606 "message": "Connection timed out" 00:26:30.606 } 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 57957 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.606 rmmod nvme_tcp 00:26:30.606 rmmod nvme_fabrics 00:26:30.606 rmmod nvme_keyring 00:26:30.606 12:13:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 57718 ']' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 57718 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 57718 ']' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 57718 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57718 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57718' 00:26:30.606 killing process with pid 57718 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 57718 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 57718 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.606 12:13:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.606 12:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.140 12:13:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.140 00:26:33.140 real 0m19.035s 00:26:33.140 user 0m24.250s 00:26:33.140 sys 0m5.959s 00:26:33.140 12:13:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.140 12:13:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.140 ************************************ 00:26:33.140 END TEST nvmf_host_discovery 00:26:33.140 ************************************ 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.140 ************************************ 00:26:33.140 START TEST nvmf_host_multipath_status 00:26:33.140 ************************************ 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:33.140 * Looking for test storage... 00:26:33.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.140 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable
00:26:33.141 12:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=()
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:38.445 Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:38.445 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:38.446 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:38.446 Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:38.446 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:38.446 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:38.446 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:38.705 Found net devices under 0000:af:00.0: cvl_0_0
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:38.705 Found net devices under 0000:af:00.1: cvl_0_1
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:38.705 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:38.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
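The nvmf/common.sh@242-267 sequence traced above splits a two-port NIC across network namespaces so target and initiator can talk over real TCP on one host. A minimal sketch of that topology setup follows; this is a hedged dry-run, not the autotest script itself: the hypothetical `run` helper only records and echoes each command (the real steps need root and the actual cvl_0_0/cvl_0_1 ports), and the preliminary `ip -4 addr flush` calls from the log are omitted.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns split performed by nvmftestinit in the log.
# `run` is a hypothetical helper: it records and echoes instead of executing,
# so this is safe without root. Drop the echo to apply it for real.
set -euo pipefail

CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # namespace that will own the target-side port
TGT_IF=cvl_0_0       # target interface, becomes 10.0.0.2 inside the netns
INI_IF=cvl_0_1       # initiator interface, stays 10.0.0.1 in the root netns

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (port 4420) in from the initiator side
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# initiator -> target reachability check, as the log does before starting nvmf_tgt
run ping -c 1 10.0.0.2
```

Because the two interfaces end up in different namespaces, traffic between 10.0.0.1 and 10.0.0.2 leaves one physical port and re-enters the other, exercising the real NIC path rather than loopback.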
00:26:38.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms
00:26:38.706
00:26:38.706 --- 10.0.0.2 ping statistics ---
00:26:38.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:38.706 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:26:38.706 12:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:38.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:38.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:26:38.965
00:26:38.965 --- 10.0.0.1 ping statistics ---
00:26:38.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:38.965 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=63417
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 63417
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 63417 ']'
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:38.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:38.965 12:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:38.965 [2024-07-25 12:13:16.109050] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:26:38.965 [2024-07-25 12:13:16.109106] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:38.965 EAL: No free 2048 kB hugepages reported on node 1
00:26:38.965 [2024-07-25 12:13:16.197704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:39.223 [2024-07-25 12:13:16.287761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:39.224 [2024-07-25 12:13:16.287802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:39.224 [2024-07-25 12:13:16.287812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:39.224 [2024-07-25 12:13:16.287821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:39.224 [2024-07-25 12:13:16.287828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:39.224 [2024-07-25 12:13:16.287879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:39.224 [2024-07-25 12:13:16.287885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=63417
00:26:39.792 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:40.051 [2024-07-25 12:13:17.307422] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:40.051 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:26:40.309 Malloc0
00:26:40.309 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:26:40.568 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:40.826 12:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:41.084 [2024-07-25 12:13:18.129858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:41.084 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:41.343 [2024-07-25 12:13:18.394681] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=63956
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 63956 /var/tmp/bdevperf.sock
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 63956 ']'
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:41.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:41.343 12:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:42.280 12:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:42.280 12:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:26:42.280 12:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:26:42.280 12:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:26:42.847 Nvme0n1
00:26:42.847 12:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:26:43.105 Nvme0n1
00:26:43.105 12:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:26:43.105 12:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
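The host/multipath_status.sh@52-56 steps traced above build the two-path configuration under test: the same subsystem NQN is attached once per listener port, and the second attach passes `-x multipath` so the new path joins the existing `Nvme0` controller instead of being rejected as a duplicate. A minimal sketch of that call sequence follows; note this is a hedged dry-run, with `rpc_py` a hypothetical stub that echoes the RPC it would send, so it runs without a live bdevperf process.

```shell
# Sketch of the two-path attach from the log. rpc_py stands in for
# `scripts/rpc.py -s /var/tmp/bdevperf.sock ...` and only echoes here.
rpc_py() { echo "rpc.py -s /var/tmp/bdevperf.sock $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# -r -1: retry reconnects forever, so path loss does not tear the bdev down
rpc_py bdev_nvme_set_options -r -1

# first path: port 4420, creates controller Nvme0 (and namespace Nvme0n1)
first=$(rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -l -1 -o 10)

# second path: port 4421; -x multipath adds it to Nvme0 as an alternate path
second=$(rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x multipath -l -1 -o 10)
```

With both attaches in place, the ANA state changes that follow in the log can flip which path is `current` without I/O ever seeing the bdev disappear.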
00:26:45.638 12:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:45.638 12:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:45.638 12:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:45.638 12:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:46.574 12:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:46.574 12:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:46.574 12:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.574 12:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:46.832 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.832 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:46.832 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.832 12:13:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.090 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.090 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.090 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.090 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:47.349 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.349 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:47.349 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.349 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:47.607 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.607 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:47.607 12:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:47.607 12:13:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.865 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.865 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:47.865 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:47.865 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.123 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.123 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:48.123 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.381 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:48.639 12:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:50.015 12:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:50.015 12:13:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:50.015 12:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.015 12:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.015 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.015 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.015 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.015 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.274 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.274 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.274 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.274 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.532 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.532 12:13:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.532 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.532 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.790 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.790 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.790 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.790 12:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.051 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.051 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.051 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.051 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.309 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.309 
12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:26:51.309 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:51.567 12:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:26:51.826 12:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:26:52.761 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:26:52.761 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:52.761 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:52.761 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:53.020 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.020 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:53.020 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.020 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:53.278 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:53.278 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:53.278 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.278 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:53.537 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.537 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:53.537 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.537 12:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:53.795 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.795 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:53.795 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.795 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:54.054 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:54.054 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:54.054 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:54.054 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:54.312 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:54.312 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:26:54.312 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:54.570 12:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:54.829 12:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:26:55.765 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:26:55.765 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:55.765 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:55.765 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:56.024 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:56.024 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:56.024 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:56.024 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:56.282 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:56.282 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:56.282 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:56.282 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:56.848 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:56.849 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:56.849 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:56.849 12:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:56.849 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:56.849 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:56.849 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:56.849 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:57.107 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:57.107 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:57.107 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:57.107 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:57.366 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:57.366
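The rounds up to this point (script lines @100-@114, before the policy switch at @116) each follow the same shape: `set_ANA_state A B`, `sleep 1`, then `check_status` with six expected flags. Collecting the expectations exactly as they appear in the log gives a small truth table for the default active_passive multipath policy; the tuples below are copied from the `check_status` lines, not derived from SPDK internals:

```python
# (ana_4420, ana_4421) -> (current_4420, current_4421,
#                          connected_4420, connected_4421,
#                          accessible_4420, accessible_4421)
# under the active_passive multipath policy, as checked at @102/@106/@110/@114.
ACTIVE_PASSIVE_CASES = {
    ("non_optimized", "non_optimized"): (True, False, True, True, True, True),
    ("non_optimized", "inaccessible"):  (True, False, True, True, True, False),
    ("inaccessible", "inaccessible"):   (False, False, True, True, False, False),
    ("inaccessible", "optimized"):      (False, True, True, True, False, True),
}

# Pattern visible in the data: an inaccessible listener keeps its TCP
# connection (connected stays true) but its path is no longer accessible,
# and only accessible paths can be current.
for (s0, s1), flags in ACTIVE_PASSIVE_CASES.items():
    cur0, cur1, _, _, acc0, acc1 = flags
    assert acc0 == (s0 != "inaccessible")
    assert acc1 == (s1 != "inaccessible")
```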
12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:26:57.366 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:57.626 12:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:57.884 12:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:59.260 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:59.519 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:59.519 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:59.519 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:59.519 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:59.778 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:59.778 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:59.778 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:59.778 12:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:00.036 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:00.036 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:27:00.036 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:00.036 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:00.295 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:00.295 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:00.295 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:00.295 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:00.554 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:00.554 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:27:00.554 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:27:00.812 12:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:01.071 12:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:27:02.005 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:27:02.005 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:02.005 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:02.005 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:02.264 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:02.264 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:02.264 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:02.264 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:02.522 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:02.522 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:02.522 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:02.522 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:02.781 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:02.782 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:02.782 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:02.782 12:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:03.040 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:03.041 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:27:03.041 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:03.041 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:03.301 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:03.301 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:03.301 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:03.301 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:03.593 12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:03.593
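Across all of these rounds, the `check_status` expectations are consistent with one simple selection rule: the best-ranked ANA group among accessible paths is eligible, active_active (enabled at @116 below) keeps every eligible path current, and active_passive keeps exactly one. The sketch below is my reading of the log's expectations, not SPDK's actual path-selection code; in particular, real active_passive failover may stick with the previously current path, whereas in this log the tie always lands on port 4420:

```python
# ANA group ranking: lower is preferred.
RANK = {"optimized": 0, "non_optimized": 1, "inaccessible": 2}

def current_ports(policy: str, states: dict) -> set:
    """Ports expected to report current=true, per the log's check_status lines."""
    best = min(RANK[s] for s in states.values())
    if best == RANK["inaccessible"]:
        return set()  # no accessible path at all -> nothing is current
    eligible = {port for port, s in states.items() if RANK[s] == best}
    if policy == "active_active":
        return eligible  # every best-ranked path carries I/O
    # active_passive: a single path is current; ties resolve to 4420 in this log
    return {sorted(eligible)[0]}
```

Checking it against the log: `inaccessible/optimized` under active_passive gives `{"4421"}` (the @114 round), and `non_optimized/optimized` under active_active gives `{"4421"}` (the @125 round).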
12:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:27:03.851 12:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:27:03.851 12:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:27:04.109 12:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:04.368 12:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:27:05.303 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:27:05.303 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:05.303 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:05.303 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:05.562 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:05.562 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:05.562 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:05.562 12:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:05.821 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:05.821 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:05.821 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:05.821 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:06.079 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.079 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:06.079 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.079 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:06.646 12:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.905 12:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.905 12:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:27:06.905 12:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:07.164 12:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:07.422 12:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
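`set_ANA_state` (@59-@60) is just two `rpc.py nvmf_subsystem_listener_set_ana_state` invocations, one per listener port. A small Python helper that rebuilds the argument vector seen in the log; the script path, NQN, and address are copied from the log, while the helper itself is ours, not part of SPDK:

```python
import shlex

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def set_ana_state_cmd(trsvcid: str, ana_state: str, traddr: str = "10.0.0.2") -> list:
    """Argument vector for one listener's ANA-state change, mirroring
    multipath_status.sh@59-60 (helper name is hypothetical, not SPDK's)."""
    return shlex.split(
        f"{RPC} nvmf_subsystem_listener_set_ana_state {NQN} "
        f"-t tcp -a {traddr} -s {trsvcid} -n {ana_state}"
    )

cmd = set_ana_state_cmd("4421", "optimized")
```

Passing the new state with `-n` against a specific `-t/-a/-s` listener tuple is what lets the test flip each port between optimized, non_optimized, and inaccessible independently.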
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:08.797 12:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:09.056 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.056 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:09.056 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.056 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:09.314 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.314 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:09.314 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.314 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:09.573 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.573 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:09.573 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.573 12:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:09.832 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.832 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:09.832 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.832 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:10.091 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:10.091 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:27:10.091 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:10.349 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:27:10.608 12:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:27:11.545 12:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:27:11.545 12:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:11.545 12:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.545 12:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:11.804 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:11.804 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:11.804 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:11.804 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:12.063 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.063 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:12.063 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:12.063 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:12.631 12:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:12.889 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.889 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:12.889 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:12.889 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:13.148 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:13.148 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:27:13.148 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:13.406 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:27:13.665 12:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:27:15.040 12:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:27:15.040 12:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:15.040 12:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:15.040 12:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:15.040 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:15.040 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:15.040 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:15.040 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:15.298 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:15.298 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:15.298 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:15.298 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select
(.transport.trsvcid=="4420").connected' 00:27:15.557 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.557 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.557 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.557 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.816 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.816 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.816 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.816 12:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.115 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.115 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:16.115 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.115 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 63956 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 63956 ']' 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 63956 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63956 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63956' 00:27:16.374 killing process with pid 63956 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 63956 00:27:16.374 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 63956 00:27:16.645 Connection closed with partial response: 00:27:16.645 00:27:16.645 00:27:16.645 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 63956 00:27:16.645 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:16.645 
[2024-07-25 12:13:18.474555] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:27:16.645 [2024-07-25 12:13:18.474611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63956 ] 00:27:16.645 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.645 [2024-07-25 12:13:18.576286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.645 [2024-07-25 12:13:18.720208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.645 Running I/O for 90 seconds... 00:27:16.645 [2024-07-25 12:13:34.882434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.645 [2024-07-25 12:13:34.882511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.882594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.882676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 
[2024-07-25 12:13:34.882741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.882808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.882873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.882936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.882977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 
12:13:34.883103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.645 [2024-07-25 12:13:34.883657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:16.645 [2024-07-25 12:13:34.883699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.883721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.883761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.883783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.883825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.883847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.883888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.883910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.883950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.883973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.884527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.884550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.886715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.886761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.886809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.886832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.886874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.886897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.886944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.886968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.887931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.887971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.646 [2024-07-25 12:13:34.887993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.888034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.888056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.888096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.888119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.888159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.888183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.888223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.888245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.646 [2024-07-25 12:13:34.888286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.646 [2024-07-25 12:13:34.888308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:16.647 [2024-07-25 12:13:34.888349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.647 [2024-07-25 12:13:34.888371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.647 [2024-07-25 12:13:34.888411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.647 [2024-07-25 12:13:34.888434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:16.647 [2024-07-25 12:13:34.888475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.647 [2024-07-25 12:13:34.888501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:16.647 [2024-07-25 12:13:34.889246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:16.647 [2024-07-25 12:13:34.889272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: READ (len:8, SGL TRANSPORT DATA BLOCK) and WRITE (len:8, SGL DATA BLOCK OFFSET, len:0x1000) commands on sqid:1, lba range 81640-82656, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 001b-007f wrapping to 0000-000e, timestamps 2024-07-25 12:13:34.888475 through .899183 ...]
00:27:16.650 [2024-07-25 12:13:34.899223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.650 [2024-07-25 12:13:34.899565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.899956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.899997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.650 [2024-07-25 12:13:34.900796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.650 [2024-07-25 12:13:34.900859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.650 [2024-07-25 12:13:34.900922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.900962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.650 [2024-07-25 12:13:34.900984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:16.650 [2024-07-25 12:13:34.901024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.901940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.902685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.902707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.904956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.904979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.905019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.905042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.905083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.905105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.905147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.905170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.905210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.651 [2024-07-25 12:13:34.905232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:16.651 [2024-07-25 12:13:34.905272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.652 [2024-07-25 12:13:34.905295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:16.652 [2024-07-25 12:13:34.905336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.652 [2024-07-25 12:13:34.905358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:16.652 [2024-07-25 12:13:34.905402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.652 [2024-07-25 12:13:34.905425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:16.652 [2024-07-25 12:13:34.905465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.652 [2024-07-25 12:13:34.905488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:16.652 [2024-07-25 12:13:34.905529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.652 [2024-07-25 12:13:34.905551] nvme_qpair.c: 
00:27:16.652 [2024-07-25 12:13:34.905 - 12:13:34.916] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~130 repetitive command/completion pairs elided: WRITE and READ commands on sqid:1 nsid:1 (lba:81640-82656, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.916947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.916970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.917954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.917995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.655 [2024-07-25 12:13:34.918018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.918059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.918083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.918124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.918147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.918188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.918212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.918253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.655 [2024-07-25 12:13:34.918277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:16.655 [2024-07-25 12:13:34.918318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.918975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.918999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.919830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.919854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.656 [2024-07-25 12:13:34.921954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.921995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.656 [2024-07-25 12:13:34.922453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.656 [2024-07-25 12:13:34.922477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.657 [2024-07-25 12:13:34.922517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.657 [2024-07-25 12:13:34.922542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.657 [2024-07-25 12:13:34.922582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.657 [2024-07-25 12:13:34.922615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.657 [2024-07-25 12:13:34.922657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.657 [2024-07-25 12:13:34.922681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.657 [2024-07-25 12:13:34.922722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.657 [2024-07-25 12:13:34.922746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.657 [2024-07-25 12:13:34.922786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.657 [2024-07-25 12:13:34.922811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:16.657 [2024-07-25 12:13:34.922852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.657 [2024-07-25 12:13:34.922875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:16.657 [2024-07-25 12:13:34.922982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:16.657 [2024-07-25 12:13:34.923006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... repeated command/completion NOTICE pairs elided: READ and WRITE I/O (len:8, lba 81640-82656) on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:16.660 [2024-07-25 12:13:34.932714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:16.660 [2024-07-25 12:13:34.932737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:16.660 [2024-07-25 12:13:34.932789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.932813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.932866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.932897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.932950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.933913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.933966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.660 [2024-07-25 12:13:34.933990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.934925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.934978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.935002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.660 [2024-07-25 12:13:34.935054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.660 [2024-07-25 12:13:34.935078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:34.935155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:34.935232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:34.935309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:34.935385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:34.935462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.935539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.935631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.935708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.935788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.935864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.935940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.935993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.936944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.936997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.937533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.937557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:34.938016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:34.938052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:50.932089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:50.932174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:50.932253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.661 [2024-07-25 12:13:50.932281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:50.932323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:50.932345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:50.932386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:50.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:16.661 [2024-07-25 12:13:50.932449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.661 [2024-07-25 12:13:50.932471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.932533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.932595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.932668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.932730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.932792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.932855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.932927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.932968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.932990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.933031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.933053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.933093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.933115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.933156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.933177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.933218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.933241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.935750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.935797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.935844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.935867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.935908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.935931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.935971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.935993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.936753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.936938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.936989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.937010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.662 [2024-07-25 12:13:50.937073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.937134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.937194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.937255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.937316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.662 [2024-07-25 12:13:50.937378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:16.662 [2024-07-25 12:13:50.937418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.937440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.937501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.937563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.937639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.937700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.937767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.937829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.937890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.937951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.937991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.938014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.938509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.938571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.663 [2024-07-25 12:13:50.938761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.938822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:16.663 [2024-07-25 12:13:50.938863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.663 [2024-07-25 12:13:50.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:16.663 Received shutdown signal, test time was about 33.042057 seconds
00:27:16.663
00:27:16.663 Latency(us)
00:27:16.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:16.663 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:16.663 Verification LBA range: start 0x0 length 0x4000
00:27:16.663 Nvme0n1 : 33.04 4585.52 17.91 0.00 0.00 27836.94 1251.14 4087539.90
00:27:16.663 ===================================================================================================================
00:27:16.663 Total : 4585.52 17.91 0.00 0.00 27836.94 1251.14 4087539.90
00:27:16.663 12:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:16.922 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.923 rmmod nvme_tcp 00:27:16.923 rmmod nvme_fabrics 00:27:16.923 rmmod nvme_keyring 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 63417 ']' 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 63417 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 63417 ']' 00:27:16.923 12:13:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 63417 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.923 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63417 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63417' 00:27:17.182 killing process with pid 63417 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 63417 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 63417 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:17.182 12:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.715 12:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.715 00:27:19.715 real 0m46.503s 00:27:19.715 user 2m12.085s 00:27:19.715 sys 0m11.388s 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:19.716 ************************************ 00:27:19.716 END TEST nvmf_host_multipath_status 00:27:19.716 ************************************ 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.716 ************************************ 00:27:19.716 START TEST nvmf_discovery_remove_ifc 00:27:19.716 ************************************ 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:19.716 * Looking for test storage... 
00:27:19.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.716 12:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.987 12:14:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:27:24.987 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:24.988 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:24.988 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:24.988 Found net devices under 0000:af:00.0: cvl_0_0 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.988 12:14:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:24.988 Found net devices under 0000:af:00.1: cvl_0_1 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.988 12:14:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.988 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:25.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:25.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:25.248 00:27:25.248 --- 10.0.0.2 ping statistics --- 00:27:25.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.248 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:27:25.248 00:27:25.248 --- 10.0.0.1 ping statistics --- 00:27:25.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.248 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:25.248 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=73892 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 73892 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 73892 ']' 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:25.507 12:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.507 [2024-07-25 12:14:02.633339] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
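The `nvmftestinit` phase above (common.sh@248-264) moves one port of the NIC into a private network namespace so the target and the initiator can talk over real hardware on a single host. A condensed sketch of those steps, using the interface names and addresses from this run (`cvl_0_0`/`cvl_0_1`, root required):

```shell
# Sketch of the target-namespace setup performed by nvmftestinit (root required).
# Interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses match this run's log.
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                            # sanity check: root ns -> target ns
```

Because every command here needs CAP_NET_ADMIN and a spare NIC port, this is a configuration sketch of what the log shows, not something to run on a developer box as-is.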
00:27:25.507 [2024-07-25 12:14:02.633395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.507 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.507 [2024-07-25 12:14:02.723547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.766 [2024-07-25 12:14:02.826156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.766 [2024-07-25 12:14:02.826204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.766 [2024-07-25 12:14:02.826217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.766 [2024-07-25 12:14:02.826229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.766 [2024-07-25 12:14:02.826239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:25.766 [2024-07-25 12:14:02.826266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.343 [2024-07-25 12:14:03.577756] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.343 [2024-07-25 12:14:03.585971] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:26.343 null0 00:27:26.343 [2024-07-25 12:14:03.617935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.343 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=74044 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 74044 /tmp/host.sock 
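The `waitforlisten` calls above (first for the target on `/var/tmp/spdk.sock`, then for the host app on `/tmp/host.sock`) block until the freshly launched `nvmf_tgt` is accepting RPCs. A minimal stand-in for that polling idea, assuming a simplified helper name (`wait_for_path` is illustrative; the real `waitforlisten` also checks that the pid is alive and probes the socket with an RPC):

```shell
# Sketch of waitforlisten's core loop: retry until a path appears, then give up.
# wait_for_path is a hypothetical name for illustration only; the real helper
# additionally verifies the process id and issues a probe RPC over the socket.
wait_for_path() {
    local path=$1 tries=${2:-100}
    while [ "$tries" -gt 0 ]; do
        [ -e "$path" ] && return 0     # path (e.g. the RPC socket) showed up
        tries=$((tries - 1))
        sleep 0.1
    done
    return 1                           # timed out waiting
}
```

A caller would then do something like `wait_for_path /tmp/host.sock && rpc_cmd -s /tmp/host.sock ...`, matching the order the log shows.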
00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 74044 ']' 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:26.604 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.604 [2024-07-25 12:14:03.692859] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:27:26.604 [2024-07-25 12:14:03.692919] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74044 ] 00:27:26.604 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.604 [2024-07-25 12:14:03.774006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.604 [2024-07-25 12:14:03.865431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.604 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.863 12:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.798 [2024-07-25 12:14:05.045517] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:27.798 [2024-07-25 12:14:05.045541] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:27.799 [2024-07-25 12:14:05.045559] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.058 [2024-07-25 12:14:05.174020] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:28.058 [2024-07-25 12:14:05.237623] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:28.058 [2024-07-25 12:14:05.237681] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:28.058 [2024-07-25 12:14:05.237709] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:28.058 [2024-07-25 12:14:05.237727] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:28.058 [2024-07-25 12:14:05.237751] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
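The repeated `get_bdev_list` checks that follow reduce the bdev table to a single sorted, space-separated line (`rpc_cmd ... bdev_get_bdevs | jq -r '.[].name' | sort | xargs`) so it can be string-compared against `nvme0n1` or `''`. The normalization tail can be sketched with the RPC/jq stage stubbed out (`normalize_names` is an illustrative stand-in, not the harness's helper):

```shell
# Sketch of get_bdev_list's normalization: names in, one sorted line out.
# The real helper feeds `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'`
# into the same `sort | xargs` tail; here that stage is stubbed with printf.
normalize_names() {
    printf '%s\n' "$@" | sort | xargs
}

normalize_names nvme0n1   # with one bdev attached this yields: nvme0n1
```

Joining with `xargs` means an empty bdev list collapses to an empty string, which is exactly what the `[[ nvme0n1 != '' ]]` comparisons below rely on.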
00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.058 [2024-07-25 12:14:05.244358] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18203b0 was disconnected and freed. delete nvme_qpair. 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:28.058 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.317 
12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:28.317 12:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.254 12:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.630 12:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:31.568 12:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.515 12:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.450 [2024-07-25 12:14:10.678535] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:33.450 [2024-07-25 12:14:10.678588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.450 [2024-07-25 12:14:10.678608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.450 [2024-07-25 12:14:10.678621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.450 [2024-07-25 12:14:10.678631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.450 [2024-07-25 12:14:10.678642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.450 [2024-07-25 12:14:10.678651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.450 [2024-07-25 12:14:10.678661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.450 [2024-07-25 12:14:10.678671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.450 [2024-07-25 12:14:10.678682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.450 [2024-07-25 12:14:10.678691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.450 [2024-07-25 12:14:10.678701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x17e6c20 is same with the state(5) to be set 00:27:33.450 [2024-07-25 12:14:10.688554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6c20 (9): Bad file descriptor 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.450 12:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.450 [2024-07-25 12:14:10.698598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.826 [2024-07-25 12:14:11.753675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:34.826 [2024-07-25 12:14:11.753768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e6c20 with addr=10.0.0.2, port=4420 00:27:34.826 [2024-07-25 12:14:11.753800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e6c20 is same with the state(5) to be set 00:27:34.826 [2024-07-25 12:14:11.753853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6c20 (9): Bad file descriptor 00:27:34.826 [2024-07-25 12:14:11.754811] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:34.826 [2024-07-25 12:14:11.754876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.826 [2024-07-25 12:14:11.754900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.826 [2024-07-25 12:14:11.754922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.826 [2024-07-25 12:14:11.754981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.826 [2024-07-25 12:14:11.755006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.826 12:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.826 12:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:34.826 12:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.759 [2024-07-25 12:14:12.757503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.760 [2024-07-25 12:14:12.757530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.760 [2024-07-25 12:14:12.757540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.760 [2024-07-25 12:14:12.757550] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:35.760 [2024-07-25 12:14:12.757565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.760 [2024-07-25 12:14:12.757588] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:35.760 [2024-07-25 12:14:12.757619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.760 [2024-07-25 12:14:12.757633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.760 [2024-07-25 12:14:12.757645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.760 [2024-07-25 12:14:12.757655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.760 [2024-07-25 12:14:12.757671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.760 [2024-07-25 12:14:12.757681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.760 [2024-07-25 12:14:12.757691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.760 [2024-07-25 12:14:12.757700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.760 [2024-07-25 12:14:12.757711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.760 [2024-07-25 12:14:12.757720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.760 [2024-07-25 12:14:12.757730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:27:35.760 [2024-07-25 12:14:12.757746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6080 (9): Bad file descriptor 00:27:35.760 [2024-07-25 12:14:12.758582] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:35.760 [2024-07-25 12:14:12.758596] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.760 12:14:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:35.760 12:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:36.694 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.694 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.694 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.694 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.694 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.694 12:14:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.694 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.952 12:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.952 12:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:36.953 12:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:37.519 [2024-07-25 12:14:14.812800] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:37.519 [2024-07-25 12:14:14.812822] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:37.519 [2024-07-25 12:14:14.812842] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:37.777 [2024-07-25 12:14:14.900154] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.777 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.035 [2024-07-25 12:14:15.083658] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:38.035 [2024-07-25 12:14:15.083705] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:38.035 [2024-07-25 12:14:15.083729] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:38.035 [2024-07-25 12:14:15.083747] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:38.035 [2024-07-25 12:14:15.083757] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:38.035 [2024-07-25 12:14:15.090632] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17eda10 was disconnected and freed. delete nvme_qpair. 00:27:38.035 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:38.035 12:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.967 12:14:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 74044 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 74044 ']' 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 74044 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74044 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74044' 00:27:38.967 killing process with pid 74044 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 74044 00:27:38.967 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 74044 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.225 rmmod nvme_tcp 00:27:39.225 rmmod nvme_fabrics 00:27:39.225 rmmod nvme_keyring 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 73892 ']' 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 73892 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 73892 ']' 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 73892 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73892 00:27:39.225 12:14:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73892' 00:27:39.225 killing process with pid 73892 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 73892 00:27:39.225 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 73892 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.790 12:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:41.691 00:27:41.691 real 0m22.265s 00:27:41.691 user 0m27.866s 00:27:41.691 sys 0m5.872s 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.691 12:14:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.691 ************************************ 00:27:41.691 END TEST nvmf_discovery_remove_ifc 00:27:41.691 ************************************ 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.691 ************************************ 00:27:41.691 START TEST nvmf_identify_kernel_target 00:27:41.691 ************************************ 00:27:41.691 12:14:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:41.950 * Looking for test storage... 
00:27:41.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.950 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.951 12:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:48.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.521 12:14:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:48.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.521 12:14:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:48.521 Found net devices under 0000:af:00.0: cvl_0_0 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:48.521 Found net devices under 0000:af:00.1: cvl_0_1 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:48.521 
12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.521 
12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.521 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:27:48.522 00:27:48.522 --- 10.0.0.2 ping statistics --- 00:27:48.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.522 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:27:48.522 00:27:48.522 --- 10.0.0.1 ping statistics --- 00:27:48.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.522 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.522 12:14:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:48.522 12:14:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:50.429 Waiting for block devices as requested 00:27:50.429 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:50.687 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:50.687 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:50.687 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:50.945 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:50.945 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:50.945 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:51.204 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:51.204 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:51.204 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:51.204 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:51.462 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:51.462 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:51.462 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:51.721 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:51.721 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:51.721 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:51.721 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:51.981 No valid GPT data, bailing 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:51.981 00:27:51.981 Discovery Log Number of Records 2, Generation counter 2 00:27:51.981 =====Discovery Log Entry 0====== 00:27:51.981 trtype: tcp 00:27:51.981 adrfam: ipv4 00:27:51.981 subtype: current discovery subsystem 00:27:51.981 treq: not specified, sq flow control disable supported 00:27:51.981 portid: 1 00:27:51.981 trsvcid: 4420 00:27:51.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:51.981 traddr: 10.0.0.1 00:27:51.981 eflags: none 00:27:51.981 sectype: none 00:27:51.981 =====Discovery Log Entry 1====== 00:27:51.981 trtype: tcp 00:27:51.981 adrfam: ipv4 00:27:51.981 subtype: nvme subsystem 00:27:51.981 treq: not specified, sq flow control disable supported 00:27:51.981 portid: 1 
00:27:51.981 trsvcid: 4420 00:27:51.981 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:51.981 traddr: 10.0.0.1 00:27:51.981 eflags: none 00:27:51.981 sectype: none 00:27:51.981 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:51.981 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:51.981 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.981 ===================================================== 00:27:51.981 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:51.981 ===================================================== 00:27:51.981 Controller Capabilities/Features 00:27:51.981 ================================ 00:27:51.981 Vendor ID: 0000 00:27:51.981 Subsystem Vendor ID: 0000 00:27:51.981 Serial Number: 1a9301e01c7973a524f0 00:27:51.981 Model Number: Linux 00:27:51.981 Firmware Version: 6.7.0-68 00:27:51.981 Recommended Arb Burst: 0 00:27:51.981 IEEE OUI Identifier: 00 00 00 00:27:51.981 Multi-path I/O 00:27:51.981 May have multiple subsystem ports: No 00:27:51.981 May have multiple controllers: No 00:27:51.981 Associated with SR-IOV VF: No 00:27:51.981 Max Data Transfer Size: Unlimited 00:27:51.981 Max Number of Namespaces: 0 00:27:51.981 Max Number of I/O Queues: 1024 00:27:51.981 NVMe Specification Version (VS): 1.3 00:27:51.981 NVMe Specification Version (Identify): 1.3 00:27:51.981 Maximum Queue Entries: 1024 00:27:51.981 Contiguous Queues Required: No 00:27:51.981 Arbitration Mechanisms Supported 00:27:51.981 Weighted Round Robin: Not Supported 00:27:51.981 Vendor Specific: Not Supported 00:27:51.981 Reset Timeout: 7500 ms 00:27:51.981 Doorbell Stride: 4 bytes 00:27:51.981 NVM Subsystem Reset: Not Supported 00:27:51.981 Command Sets Supported 00:27:51.981 NVM Command Set: Supported 00:27:51.981 Boot Partition: Not Supported 
00:27:51.981 Memory Page Size Minimum: 4096 bytes 00:27:51.981 Memory Page Size Maximum: 4096 bytes 00:27:51.981 Persistent Memory Region: Not Supported 00:27:51.981 Optional Asynchronous Events Supported 00:27:51.981 Namespace Attribute Notices: Not Supported 00:27:51.981 Firmware Activation Notices: Not Supported 00:27:51.981 ANA Change Notices: Not Supported 00:27:51.981 PLE Aggregate Log Change Notices: Not Supported 00:27:51.981 LBA Status Info Alert Notices: Not Supported 00:27:51.981 EGE Aggregate Log Change Notices: Not Supported 00:27:51.981 Normal NVM Subsystem Shutdown event: Not Supported 00:27:51.981 Zone Descriptor Change Notices: Not Supported 00:27:51.981 Discovery Log Change Notices: Supported 00:27:51.981 Controller Attributes 00:27:51.981 128-bit Host Identifier: Not Supported 00:27:51.981 Non-Operational Permissive Mode: Not Supported 00:27:51.982 NVM Sets: Not Supported 00:27:51.982 Read Recovery Levels: Not Supported 00:27:51.982 Endurance Groups: Not Supported 00:27:51.982 Predictable Latency Mode: Not Supported 00:27:51.982 Traffic Based Keep ALive: Not Supported 00:27:51.982 Namespace Granularity: Not Supported 00:27:51.982 SQ Associations: Not Supported 00:27:51.982 UUID List: Not Supported 00:27:51.982 Multi-Domain Subsystem: Not Supported 00:27:51.982 Fixed Capacity Management: Not Supported 00:27:51.982 Variable Capacity Management: Not Supported 00:27:51.982 Delete Endurance Group: Not Supported 00:27:51.982 Delete NVM Set: Not Supported 00:27:51.982 Extended LBA Formats Supported: Not Supported 00:27:51.982 Flexible Data Placement Supported: Not Supported 00:27:51.982 00:27:51.982 Controller Memory Buffer Support 00:27:51.982 ================================ 00:27:51.982 Supported: No 00:27:51.982 00:27:51.982 Persistent Memory Region Support 00:27:51.982 ================================ 00:27:51.982 Supported: No 00:27:51.982 00:27:51.982 Admin Command Set Attributes 00:27:51.982 ============================ 00:27:51.982 Security 
Send/Receive: Not Supported 00:27:51.982 Format NVM: Not Supported 00:27:51.982 Firmware Activate/Download: Not Supported 00:27:51.982 Namespace Management: Not Supported 00:27:51.982 Device Self-Test: Not Supported 00:27:51.982 Directives: Not Supported 00:27:51.982 NVMe-MI: Not Supported 00:27:51.982 Virtualization Management: Not Supported 00:27:51.982 Doorbell Buffer Config: Not Supported 00:27:51.982 Get LBA Status Capability: Not Supported 00:27:51.982 Command & Feature Lockdown Capability: Not Supported 00:27:51.982 Abort Command Limit: 1 00:27:51.982 Async Event Request Limit: 1 00:27:51.982 Number of Firmware Slots: N/A 00:27:51.982 Firmware Slot 1 Read-Only: N/A 00:27:52.242 Firmware Activation Without Reset: N/A 00:27:52.242 Multiple Update Detection Support: N/A 00:27:52.242 Firmware Update Granularity: No Information Provided 00:27:52.242 Per-Namespace SMART Log: No 00:27:52.242 Asymmetric Namespace Access Log Page: Not Supported 00:27:52.242 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:52.242 Command Effects Log Page: Not Supported 00:27:52.242 Get Log Page Extended Data: Supported 00:27:52.242 Telemetry Log Pages: Not Supported 00:27:52.242 Persistent Event Log Pages: Not Supported 00:27:52.242 Supported Log Pages Log Page: May Support 00:27:52.242 Commands Supported & Effects Log Page: Not Supported 00:27:52.242 Feature Identifiers & Effects Log Page:May Support 00:27:52.242 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.242 Data Area 4 for Telemetry Log: Not Supported 00:27:52.243 Error Log Page Entries Supported: 1 00:27:52.243 Keep Alive: Not Supported 00:27:52.243 00:27:52.243 NVM Command Set Attributes 00:27:52.243 ========================== 00:27:52.243 Submission Queue Entry Size 00:27:52.243 Max: 1 00:27:52.243 Min: 1 00:27:52.243 Completion Queue Entry Size 00:27:52.243 Max: 1 00:27:52.243 Min: 1 00:27:52.243 Number of Namespaces: 0 00:27:52.243 Compare Command: Not Supported 00:27:52.243 Write Uncorrectable Command: 
Not Supported 00:27:52.243 Dataset Management Command: Not Supported 00:27:52.243 Write Zeroes Command: Not Supported 00:27:52.243 Set Features Save Field: Not Supported 00:27:52.243 Reservations: Not Supported 00:27:52.243 Timestamp: Not Supported 00:27:52.243 Copy: Not Supported 00:27:52.243 Volatile Write Cache: Not Present 00:27:52.243 Atomic Write Unit (Normal): 1 00:27:52.243 Atomic Write Unit (PFail): 1 00:27:52.243 Atomic Compare & Write Unit: 1 00:27:52.243 Fused Compare & Write: Not Supported 00:27:52.243 Scatter-Gather List 00:27:52.243 SGL Command Set: Supported 00:27:52.243 SGL Keyed: Not Supported 00:27:52.243 SGL Bit Bucket Descriptor: Not Supported 00:27:52.243 SGL Metadata Pointer: Not Supported 00:27:52.243 Oversized SGL: Not Supported 00:27:52.243 SGL Metadata Address: Not Supported 00:27:52.243 SGL Offset: Supported 00:27:52.243 Transport SGL Data Block: Not Supported 00:27:52.243 Replay Protected Memory Block: Not Supported 00:27:52.243 00:27:52.243 Firmware Slot Information 00:27:52.243 ========================= 00:27:52.243 Active slot: 0 00:27:52.243 00:27:52.243 00:27:52.243 Error Log 00:27:52.243 ========= 00:27:52.243 00:27:52.243 Active Namespaces 00:27:52.243 ================= 00:27:52.243 Discovery Log Page 00:27:52.243 ================== 00:27:52.243 Generation Counter: 2 00:27:52.243 Number of Records: 2 00:27:52.243 Record Format: 0 00:27:52.243 00:27:52.243 Discovery Log Entry 0 00:27:52.243 ---------------------- 00:27:52.243 Transport Type: 3 (TCP) 00:27:52.243 Address Family: 1 (IPv4) 00:27:52.243 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:52.243 Entry Flags: 00:27:52.243 Duplicate Returned Information: 0 00:27:52.243 Explicit Persistent Connection Support for Discovery: 0 00:27:52.243 Transport Requirements: 00:27:52.243 Secure Channel: Not Specified 00:27:52.243 Port ID: 1 (0x0001) 00:27:52.243 Controller ID: 65535 (0xffff) 00:27:52.243 Admin Max SQ Size: 32 00:27:52.243 Transport Service Identifier: 4420 
00:27:52.243 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:52.243 Transport Address: 10.0.0.1 00:27:52.243 Discovery Log Entry 1 00:27:52.243 ---------------------- 00:27:52.243 Transport Type: 3 (TCP) 00:27:52.243 Address Family: 1 (IPv4) 00:27:52.243 Subsystem Type: 2 (NVM Subsystem) 00:27:52.243 Entry Flags: 00:27:52.243 Duplicate Returned Information: 0 00:27:52.243 Explicit Persistent Connection Support for Discovery: 0 00:27:52.243 Transport Requirements: 00:27:52.243 Secure Channel: Not Specified 00:27:52.243 Port ID: 1 (0x0001) 00:27:52.243 Controller ID: 65535 (0xffff) 00:27:52.243 Admin Max SQ Size: 32 00:27:52.243 Transport Service Identifier: 4420 00:27:52.243 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:52.243 Transport Address: 10.0.0.1 00:27:52.243 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:52.243 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.243 get_feature(0x01) failed 00:27:52.243 get_feature(0x02) failed 00:27:52.243 get_feature(0x04) failed 00:27:52.243 ===================================================== 00:27:52.243 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:52.243 ===================================================== 00:27:52.243 Controller Capabilities/Features 00:27:52.243 ================================ 00:27:52.243 Vendor ID: 0000 00:27:52.243 Subsystem Vendor ID: 0000 00:27:52.243 Serial Number: b38b07f5fea1148cf1c8 00:27:52.243 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:52.243 Firmware Version: 6.7.0-68 00:27:52.243 Recommended Arb Burst: 6 00:27:52.243 IEEE OUI Identifier: 00 00 00 00:27:52.243 Multi-path I/O 00:27:52.243 May have multiple subsystem ports: Yes 00:27:52.243 May have multiple 
controllers: Yes 00:27:52.243 Associated with SR-IOV VF: No 00:27:52.243 Max Data Transfer Size: Unlimited 00:27:52.243 Max Number of Namespaces: 1024 00:27:52.243 Max Number of I/O Queues: 128 00:27:52.243 NVMe Specification Version (VS): 1.3 00:27:52.243 NVMe Specification Version (Identify): 1.3 00:27:52.243 Maximum Queue Entries: 1024 00:27:52.243 Contiguous Queues Required: No 00:27:52.243 Arbitration Mechanisms Supported 00:27:52.243 Weighted Round Robin: Not Supported 00:27:52.243 Vendor Specific: Not Supported 00:27:52.243 Reset Timeout: 7500 ms 00:27:52.243 Doorbell Stride: 4 bytes 00:27:52.243 NVM Subsystem Reset: Not Supported 00:27:52.243 Command Sets Supported 00:27:52.243 NVM Command Set: Supported 00:27:52.243 Boot Partition: Not Supported 00:27:52.243 Memory Page Size Minimum: 4096 bytes 00:27:52.243 Memory Page Size Maximum: 4096 bytes 00:27:52.243 Persistent Memory Region: Not Supported 00:27:52.243 Optional Asynchronous Events Supported 00:27:52.243 Namespace Attribute Notices: Supported 00:27:52.243 Firmware Activation Notices: Not Supported 00:27:52.243 ANA Change Notices: Supported 00:27:52.243 PLE Aggregate Log Change Notices: Not Supported 00:27:52.243 LBA Status Info Alert Notices: Not Supported 00:27:52.243 EGE Aggregate Log Change Notices: Not Supported 00:27:52.243 Normal NVM Subsystem Shutdown event: Not Supported 00:27:52.243 Zone Descriptor Change Notices: Not Supported 00:27:52.243 Discovery Log Change Notices: Not Supported 00:27:52.243 Controller Attributes 00:27:52.243 128-bit Host Identifier: Supported 00:27:52.243 Non-Operational Permissive Mode: Not Supported 00:27:52.243 NVM Sets: Not Supported 00:27:52.243 Read Recovery Levels: Not Supported 00:27:52.243 Endurance Groups: Not Supported 00:27:52.243 Predictable Latency Mode: Not Supported 00:27:52.243 Traffic Based Keep ALive: Supported 00:27:52.243 Namespace Granularity: Not Supported 00:27:52.243 SQ Associations: Not Supported 00:27:52.243 UUID List: Not Supported 
00:27:52.243 Multi-Domain Subsystem: Not Supported 00:27:52.243 Fixed Capacity Management: Not Supported 00:27:52.243 Variable Capacity Management: Not Supported 00:27:52.243 Delete Endurance Group: Not Supported 00:27:52.243 Delete NVM Set: Not Supported 00:27:52.243 Extended LBA Formats Supported: Not Supported 00:27:52.243 Flexible Data Placement Supported: Not Supported 00:27:52.243 00:27:52.243 Controller Memory Buffer Support 00:27:52.243 ================================ 00:27:52.243 Supported: No 00:27:52.243 00:27:52.243 Persistent Memory Region Support 00:27:52.243 ================================ 00:27:52.243 Supported: No 00:27:52.243 00:27:52.243 Admin Command Set Attributes 00:27:52.243 ============================ 00:27:52.243 Security Send/Receive: Not Supported 00:27:52.243 Format NVM: Not Supported 00:27:52.243 Firmware Activate/Download: Not Supported 00:27:52.243 Namespace Management: Not Supported 00:27:52.243 Device Self-Test: Not Supported 00:27:52.243 Directives: Not Supported 00:27:52.243 NVMe-MI: Not Supported 00:27:52.243 Virtualization Management: Not Supported 00:27:52.243 Doorbell Buffer Config: Not Supported 00:27:52.243 Get LBA Status Capability: Not Supported 00:27:52.243 Command & Feature Lockdown Capability: Not Supported 00:27:52.243 Abort Command Limit: 4 00:27:52.243 Async Event Request Limit: 4 00:27:52.243 Number of Firmware Slots: N/A 00:27:52.243 Firmware Slot 1 Read-Only: N/A 00:27:52.243 Firmware Activation Without Reset: N/A 00:27:52.243 Multiple Update Detection Support: N/A 00:27:52.243 Firmware Update Granularity: No Information Provided 00:27:52.243 Per-Namespace SMART Log: Yes 00:27:52.243 Asymmetric Namespace Access Log Page: Supported 00:27:52.243 ANA Transition Time : 10 sec 00:27:52.243 00:27:52.243 Asymmetric Namespace Access Capabilities 00:27:52.243 ANA Optimized State : Supported 00:27:52.243 ANA Non-Optimized State : Supported 00:27:52.243 ANA Inaccessible State : Supported 00:27:52.243 ANA Persistent Loss 
State : Supported 00:27:52.243 ANA Change State : Supported 00:27:52.243 ANAGRPID is not changed : No 00:27:52.243 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:52.243 00:27:52.243 ANA Group Identifier Maximum : 128 00:27:52.243 Number of ANA Group Identifiers : 128 00:27:52.244 Max Number of Allowed Namespaces : 1024 00:27:52.244 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:52.244 Command Effects Log Page: Supported 00:27:52.244 Get Log Page Extended Data: Supported 00:27:52.244 Telemetry Log Pages: Not Supported 00:27:52.244 Persistent Event Log Pages: Not Supported 00:27:52.244 Supported Log Pages Log Page: May Support 00:27:52.244 Commands Supported & Effects Log Page: Not Supported 00:27:52.244 Feature Identifiers & Effects Log Page:May Support 00:27:52.244 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.244 Data Area 4 for Telemetry Log: Not Supported 00:27:52.244 Error Log Page Entries Supported: 128 00:27:52.244 Keep Alive: Supported 00:27:52.244 Keep Alive Granularity: 1000 ms 00:27:52.244 00:27:52.244 NVM Command Set Attributes 00:27:52.244 ========================== 00:27:52.244 Submission Queue Entry Size 00:27:52.244 Max: 64 00:27:52.244 Min: 64 00:27:52.244 Completion Queue Entry Size 00:27:52.244 Max: 16 00:27:52.244 Min: 16 00:27:52.244 Number of Namespaces: 1024 00:27:52.244 Compare Command: Not Supported 00:27:52.244 Write Uncorrectable Command: Not Supported 00:27:52.244 Dataset Management Command: Supported 00:27:52.244 Write Zeroes Command: Supported 00:27:52.244 Set Features Save Field: Not Supported 00:27:52.244 Reservations: Not Supported 00:27:52.244 Timestamp: Not Supported 00:27:52.244 Copy: Not Supported 00:27:52.244 Volatile Write Cache: Present 00:27:52.244 Atomic Write Unit (Normal): 1 00:27:52.244 Atomic Write Unit (PFail): 1 00:27:52.244 Atomic Compare & Write Unit: 1 00:27:52.244 Fused Compare & Write: Not Supported 00:27:52.244 Scatter-Gather List 00:27:52.244 SGL Command Set: Supported 00:27:52.244 SGL 
Keyed: Not Supported 00:27:52.244 SGL Bit Bucket Descriptor: Not Supported 00:27:52.244 SGL Metadata Pointer: Not Supported 00:27:52.244 Oversized SGL: Not Supported 00:27:52.244 SGL Metadata Address: Not Supported 00:27:52.244 SGL Offset: Supported 00:27:52.244 Transport SGL Data Block: Not Supported 00:27:52.244 Replay Protected Memory Block: Not Supported 00:27:52.244 00:27:52.244 Firmware Slot Information 00:27:52.244 ========================= 00:27:52.244 Active slot: 0 00:27:52.244 00:27:52.244 Asymmetric Namespace Access 00:27:52.244 =========================== 00:27:52.244 Change Count : 0 00:27:52.244 Number of ANA Group Descriptors : 1 00:27:52.244 ANA Group Descriptor : 0 00:27:52.244 ANA Group ID : 1 00:27:52.244 Number of NSID Values : 1 00:27:52.244 Change Count : 0 00:27:52.244 ANA State : 1 00:27:52.244 Namespace Identifier : 1 00:27:52.244 00:27:52.244 Commands Supported and Effects 00:27:52.244 ============================== 00:27:52.244 Admin Commands 00:27:52.244 -------------- 00:27:52.244 Get Log Page (02h): Supported 00:27:52.244 Identify (06h): Supported 00:27:52.244 Abort (08h): Supported 00:27:52.244 Set Features (09h): Supported 00:27:52.244 Get Features (0Ah): Supported 00:27:52.244 Asynchronous Event Request (0Ch): Supported 00:27:52.244 Keep Alive (18h): Supported 00:27:52.244 I/O Commands 00:27:52.244 ------------ 00:27:52.244 Flush (00h): Supported 00:27:52.244 Write (01h): Supported LBA-Change 00:27:52.244 Read (02h): Supported 00:27:52.244 Write Zeroes (08h): Supported LBA-Change 00:27:52.244 Dataset Management (09h): Supported 00:27:52.244 00:27:52.244 Error Log 00:27:52.244 ========= 00:27:52.244 Entry: 0 00:27:52.244 Error Count: 0x3 00:27:52.244 Submission Queue Id: 0x0 00:27:52.244 Command Id: 0x5 00:27:52.244 Phase Bit: 0 00:27:52.244 Status Code: 0x2 00:27:52.244 Status Code Type: 0x0 00:27:52.244 Do Not Retry: 1 00:27:52.244 Error Location: 0x28 00:27:52.244 LBA: 0x0 00:27:52.244 Namespace: 0x0 00:27:52.244 Vendor Log Page: 
0x0 00:27:52.244 ----------- 00:27:52.244 Entry: 1 00:27:52.244 Error Count: 0x2 00:27:52.244 Submission Queue Id: 0x0 00:27:52.244 Command Id: 0x5 00:27:52.244 Phase Bit: 0 00:27:52.244 Status Code: 0x2 00:27:52.244 Status Code Type: 0x0 00:27:52.244 Do Not Retry: 1 00:27:52.244 Error Location: 0x28 00:27:52.244 LBA: 0x0 00:27:52.244 Namespace: 0x0 00:27:52.244 Vendor Log Page: 0x0 00:27:52.244 ----------- 00:27:52.244 Entry: 2 00:27:52.244 Error Count: 0x1 00:27:52.244 Submission Queue Id: 0x0 00:27:52.244 Command Id: 0x4 00:27:52.244 Phase Bit: 0 00:27:52.244 Status Code: 0x2 00:27:52.244 Status Code Type: 0x0 00:27:52.244 Do Not Retry: 1 00:27:52.244 Error Location: 0x28 00:27:52.244 LBA: 0x0 00:27:52.244 Namespace: 0x0 00:27:52.244 Vendor Log Page: 0x0 00:27:52.244 00:27:52.244 Number of Queues 00:27:52.244 ================ 00:27:52.244 Number of I/O Submission Queues: 128 00:27:52.244 Number of I/O Completion Queues: 128 00:27:52.244 00:27:52.244 ZNS Specific Controller Data 00:27:52.244 ============================ 00:27:52.244 Zone Append Size Limit: 0 00:27:52.244 00:27:52.244 00:27:52.244 Active Namespaces 00:27:52.244 ================= 00:27:52.244 get_feature(0x05) failed 00:27:52.244 Namespace ID:1 00:27:52.244 Command Set Identifier: NVM (00h) 00:27:52.244 Deallocate: Supported 00:27:52.244 Deallocated/Unwritten Error: Not Supported 00:27:52.244 Deallocated Read Value: Unknown 00:27:52.244 Deallocate in Write Zeroes: Not Supported 00:27:52.244 Deallocated Guard Field: 0xFFFF 00:27:52.244 Flush: Supported 00:27:52.244 Reservation: Not Supported 00:27:52.244 Namespace Sharing Capabilities: Multiple Controllers 00:27:52.244 Size (in LBAs): 1953525168 (931GiB) 00:27:52.244 Capacity (in LBAs): 1953525168 (931GiB) 00:27:52.244 Utilization (in LBAs): 1953525168 (931GiB) 00:27:52.244 UUID: 4b863204-66ff-4f0e-a605-be83fc08a234 00:27:52.244 Thin Provisioning: Not Supported 00:27:52.244 Per-NS Atomic Units: Yes 00:27:52.244 Atomic Boundary Size (Normal): 0 
00:27:52.244 Atomic Boundary Size (PFail): 0 00:27:52.244 Atomic Boundary Offset: 0 00:27:52.244 NGUID/EUI64 Never Reused: No 00:27:52.244 ANA group ID: 1 00:27:52.244 Namespace Write Protected: No 00:27:52.244 Number of LBA Formats: 1 00:27:52.244 Current LBA Format: LBA Format #00 00:27:52.244 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:52.244 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.244 rmmod nvme_tcp 00:27:52.244 rmmod nvme_fabrics 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:52.244 
12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.244 12:14:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:54.778 12:14:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:54.778 12:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.314 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:57.314 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:58.253 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:58.253 00:27:58.253 real 0m16.577s 00:27:58.253 user 0m4.145s 00:27:58.253 sys 0m8.652s 00:27:58.253 12:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.253 12:14:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.253 ************************************ 00:27:58.253 END TEST nvmf_identify_kernel_target 00:27:58.253 ************************************ 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.512 ************************************ 00:27:58.512 START TEST nvmf_auth_host 00:27:58.512 ************************************ 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.512 * Looking for test storage... 00:27:58.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.512 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.513 12:14:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.513 12:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:05.080 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:05.080 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:05.080 Found net devices under 0000:af:00.0: cvl_0_0 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:05.080 Found net devices under 0000:af:00.1: cvl_0_1 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.080 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.080 12:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:28:05.081 00:28:05.081 --- 10.0.0.2 ping statistics --- 00:28:05.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.081 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:05.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:28:05.081 00:28:05.081 --- 10.0.0.1 ping statistics --- 00:28:05.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.081 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=86504 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 86504 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 86504 ']' 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
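The `waitforlisten 86504` call above blocks until the freshly started `nvmf_tgt` (pid 86504) is answering on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A simplified poll loop in the same spirit (the real helper in autotest_common.sh also checks the pid and probes the socket with an RPC, which this sketch omits):

```shell
# Poll until a UNIX-domain socket appears, up to max_retries attempts
# (100 in the log above). Simplified: no pid check, no RPC liveness probe.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100}
    while [ "$max_retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0   # -S: file exists and is a socket
        max_retries=$((max_retries - 1))
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```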
00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:05.081 12:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c6f402624a3494794c85e533002d4683 00:28:05.340 12:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xy0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c6f402624a3494794c85e533002d4683 0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c6f402624a3494794c85e533002d4683 0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c6f402624a3494794c85e533002d4683 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xy0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xy0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.xy0 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:05.340 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:05.340 12:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:05.599 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8cf3f2548fcdf9a628b3b95721dbf554250d53343b19be70cab0dd4cc9630ed6 00:28:05.599 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iDw 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8cf3f2548fcdf9a628b3b95721dbf554250d53343b19be70cab0dd4cc9630ed6 3 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8cf3f2548fcdf9a628b3b95721dbf554250d53343b19be70cab0dd4cc9630ed6 3 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8cf3f2548fcdf9a628b3b95721dbf554250d53343b19be70cab0dd4cc9630ed6 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iDw 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iDw 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.iDw 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6302e5b3e9eecb1e7259e86cb1493bfeb909cc022fbefab3 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eF9 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6302e5b3e9eecb1e7259e86cb1493bfeb909cc022fbefab3 0 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6302e5b3e9eecb1e7259e86cb1493bfeb909cc022fbefab3 0 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6302e5b3e9eecb1e7259e86cb1493bfeb909cc022fbefab3 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eF9 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eF9 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eF9 
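Each `gen_dhchap_key <digest> <len>` call in the trace reads `len/2` random bytes with `xxd`, then hands them to an inline Python snippet (`format_key`, nvmf/common.sh@702-705) whose body the xtrace does not show. A plausible reconstruction, assuming the standard DH-HMAC-CHAP secret representation: base64 of the key bytes followed by their little-endian CRC-32, wrapped in `DHHC-1:<hash-id>:` and a trailing `:` (hash ids 0..3 = null/sha256/sha384/sha512, matching the log's `digests` table). `od` stands in for `xxd -p` for portability:

```shell
# gen_dhchap_key <digest-id> <hex-len>: emit a DHHC-1 secret string.
# Assumed layout: DHHC-1:<id>:base64(key || crc32(key) little-endian):
gen_dhchap_key() {
    local digest=$1 hexlen=$2 key
    # hexlen hex chars -> hexlen/2 random bytes (the log uses xxd -p -c0 here)
    key=$(od -vAn -N"$((hexlen / 2))" -tx1 /dev/urandom | tr -d ' \n')
    python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the key, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}

gen_dhchap_key 0 32   # a null-digest secret, like keys[0] above
gen_dhchap_key 3 64   # a sha512 secret, like ckeys[0] above
```

With a 16-byte key the base64 payload covers 20 bytes (key plus CRC), so a null-digest secret is always 39 characters end to end.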
00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=70a91a1b787ae6fac98b5be40cf6969d3910df3933974d97 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lTV 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70a91a1b787ae6fac98b5be40cf6969d3910df3933974d97 2 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70a91a1b787ae6fac98b5be40cf6969d3910df3933974d97 2 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70a91a1b787ae6fac98b5be40cf6969d3910df3933974d97 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.600 12:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lTV 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lTV 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lTV 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a86c0faf58a46d2d494121655bd8c865 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7K4 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a86c0faf58a46d2d494121655bd8c865 1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a86c0faf58a46d2d494121655bd8c865 1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=a86c0faf58a46d2d494121655bd8c865 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:05.600 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7K4 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7K4 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7K4 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2205d4c0a54da0a73c83ef79cf08ca8 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mVy 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b2205d4c0a54da0a73c83ef79cf08ca8 1 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2205d4c0a54da0a73c83ef79cf08ca8 1 00:28:05.860 12:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2205d4c0a54da0a73c83ef79cf08ca8 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mVy 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mVy 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mVy 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bc217ff92454d67277708238bd149ba16548ba1b5143b893 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IwF 00:28:05.860 12:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bc217ff92454d67277708238bd149ba16548ba1b5143b893 2 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bc217ff92454d67277708238bd149ba16548ba1b5143b893 2 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bc217ff92454d67277708238bd149ba16548ba1b5143b893 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:05.860 12:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IwF 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IwF 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IwF 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=462235276590e5de4da0c6b974d3d7d7 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eHq 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 462235276590e5de4da0c6b974d3d7d7 0 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 462235276590e5de4da0c6b974d3d7d7 0 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=462235276590e5de4da0c6b974d3d7d7 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eHq 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eHq 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.eHq 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b331956c1e55812b2ed5b027fdf683fab57fc3b0e24817a0a27bccbbad31d38 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:05.860 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xfr 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b331956c1e55812b2ed5b027fdf683fab57fc3b0e24817a0a27bccbbad31d38 3 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b331956c1e55812b2ed5b027fdf683fab57fc3b0e24817a0a27bccbbad31d38 3 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b331956c1e55812b2ed5b027fdf683fab57fc3b0e24817a0a27bccbbad31d38 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:05.861 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xfr 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xfr 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xfr 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 86504 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 86504 ']' 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.119 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xy0 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.iDw ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iDw 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eF9 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lTV ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lTV 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7K4 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mVy ]] 00:28:06.377 12:14:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mVy 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IwF 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.eHq ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.eHq 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.377 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xfr 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:06.378 12:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:08.946 Waiting for block devices as requested 00:28:09.204 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:28:09.204 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:09.204 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:09.463 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:09.463 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:09.463 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:09.463 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:09.721 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:09.721 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:09.721 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:09.979 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:09.979 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:09.979 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:09.979 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:10.238 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:10.238 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:10.238 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:11.174 No valid GPT data, bailing 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:11.174 12:14:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:11.174 00:28:11.174 Discovery Log Number of Records 2, Generation counter 2 00:28:11.174 =====Discovery Log Entry 0====== 00:28:11.174 trtype: tcp 00:28:11.174 adrfam: ipv4 00:28:11.174 subtype: current discovery subsystem 00:28:11.174 treq: not specified, sq flow control disable supported 00:28:11.174 portid: 1 00:28:11.174 trsvcid: 4420 00:28:11.174 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:11.174 traddr: 10.0.0.1 00:28:11.174 eflags: none 00:28:11.174 sectype: none 00:28:11.174 =====Discovery Log Entry 1====== 00:28:11.174 trtype: tcp 00:28:11.174 adrfam: ipv4 00:28:11.174 subtype: nvme subsystem 00:28:11.174 treq: not specified, sq flow control 
disable supported 00:28:11.174 portid: 1 00:28:11.174 trsvcid: 4420 00:28:11.174 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:11.174 traddr: 10.0.0.1 00:28:11.174 eflags: none 00:28:11.174 sectype: none 00:28:11.174 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.175 12:14:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.175 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 nvme0n1 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:11.433 
12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.433 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.434 nvme0n1 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.434 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.691 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.692 12:14:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 nvme0n1 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.692 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.950 12:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.950 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:11.950 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.950 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.950 nvme0n1 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.950 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.208 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.208 nvme0n1 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.208 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.208 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.209 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.209 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.209 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.209 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 nvme0n1 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
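The trace above repeats one RPC sequence per (digest, dhgroup, keyid) combination: set the DH-CHAP options, attach `nvme0` with the host key (plus a paired controller key for keyids 0–3; keyid 4 has none, matching the empty `ckey=` in the log), verify the controller, then detach. A minimal dry-run sketch of that iteration order, which only prints the RPCs instead of executing them (the `rpc.py` invocation form is an assumption; the addresses, NQNs, and key names are taken from the log):

```shell
# Dry-run sketch of the per-key DH-HMAC-CHAP loop exercised in this log.
# Prints each RPC instead of running it; rpc.py usage here is an assumption.
dry_run_auth_loop() {
  local digest=sha256 dhgroup keyid ckey_opt
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in 0 1 2 3 4; do
      ckey_opt=""
      # keyids 0-3 carry a paired controller key; keyid 4 does not (ckey is empty in the log)
      [ "$keyid" -lt 4 ] && ckey_opt=" --dhchap-ctrlr-key ckey${keyid}"
      echo "rpc.py bdev_nvme_set_options --dhchap-digests ${digest} --dhchap-dhgroups ${dhgroup}"
      echo "rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key${keyid}${ckey_opt}"
      echo "rpc.py bdev_nvme_detach_controller nvme0"
    done
  done
}
```

Each of the 15 iterations emits the same three-step attach/verify/detach shape seen in the trace; only the dhgroup and key pair vary.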
00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.467 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.467 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.726 nvme0n1 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.726 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.726 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:12.727 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.727 12:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.727 12:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 nvme0n1 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.986 12:14:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:12.986 12:14:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.986 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.245 nvme0n1 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.245 12:14:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.245 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.503 nvme0n1 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:13.503 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.762 
12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.762 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.763 12:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.763 nvme0n1 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.763 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.022 12:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.022 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.281 nvme0n1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:14.281 12:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.281 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.540 nvme0n1 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.540 
12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.540 12:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.799 nvme0n1 00:28:14.799 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.799 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.058 12:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.058 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:15.316 nvme0n1 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.316 
12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.316 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.317 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.317 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.317 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.575 nvme0n1 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.575 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.834 12:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.834 12:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.092 nvme0n1 00:28:16.092 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.092 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.092 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.092 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.092 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.092 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.350 12:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.350 12:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.350 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.608 nvme0n1 00:28:16.608 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.608 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.608 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.608 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.608 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.608 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.866 12:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.866 12:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.866 12:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.433 nvme0n1 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.433 12:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.433 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.434 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:18.000 nvme0n1 00:28:18.000 12:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.000 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.001 
12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.001 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 nvme0n1 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.569 12:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.569 12:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.136 nvme0n1 00:28:19.136 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.136 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.136 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.136 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.136 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.136 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.395 12:14:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.395 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.396 12:14:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.396 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.396 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.396 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.396 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.396 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.396 12:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.962 nvme0n1 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.962 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.223 12:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.223 12:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.223 12:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.789 nvme0n1 00:28:20.789 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.789 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.789 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.789 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.789 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.047 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.048 12:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.048 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:21.983 nvme0n1 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.983 12:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.983 
12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.983 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.550 nvme0n1 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.550 12:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.550 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.809 12:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.809 nvme0n1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:22.809 12:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.809 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.068 nvme0n1 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.068 
12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.068 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.069 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.329 nvme0n1 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.329 12:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:23.329 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.330 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:23.588 nvme0n1 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.588 
12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 nvme0n1 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 12:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.847 12:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.106 nvme0n1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.106 12:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.106 12:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.106 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.365 nvme0n1 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.365 12:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.365 12:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.365 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 nvme0n1 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.623 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.624 12:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.624 12:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:24.883 nvme0n1 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=:
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=:
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:24.883 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.143 nvme0n1
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX:
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=:
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX:
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=:
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.143 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.402 nvme0n1
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.402 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==:
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==:
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==:
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]]
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==:
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.660 12:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.919 nvme0n1
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0:
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4:
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0:
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4:
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:25.919 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:25.920 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:25.920 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.920 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.179 nvme0n1
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==:
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM:
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==:
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM:
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.179 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.746 nvme0n1
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=:
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=:
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:26.747 12:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.007 nvme0n1
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX:
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=:
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX:
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=:
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.007 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.577 nvme0n1
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==:
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==:
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==:
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]]
00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.577 12:15:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.577 12:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.146 nvme0n1 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.146 12:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.146 12:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.146 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.147 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.713 nvme0n1 00:28:28.713 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.714 12:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.714 12:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:29.282 nvme0n1 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.282 
12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.282 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.283 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.283 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.283 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.283 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.283 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.540 nvme0n1 00:28:29.540 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.540 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.540 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.540 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.540 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.540 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.799 12:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.799 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.800 12:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.367 nvme0n1 00:28:30.367 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.367 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.367 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.367 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.367 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.367 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.626 12:15:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.626 12:15:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.626 12:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.562 nvme0n1 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.562 12:15:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.562 12:15:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.562 12:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.132 nvme0n1 00:28:32.132 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.132 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.132 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.132 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.132 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.132 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.390 12:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.390 12:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:33.326 nvme0n1 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.326 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.327 
12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.327 12:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.893 nvme0n1 00:28:33.893 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.893 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.893 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.894 12:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.894 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.153 nvme0n1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:34.153 12:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.153 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.412 nvme0n1 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.412 
12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.412 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.670 nvme0n1 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.671 12:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.671 12:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:34.930 nvme0n1 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.930 
12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.930 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.189 nvme0n1 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.189 12:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.189 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.448 nvme0n1 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.448 12:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.448 12:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.448 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.449 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.449 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.449 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 nvme0n1 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 12:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.708 12:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.708 12:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.967 nvme0n1 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.967 12:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.967 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:36.234 nvme0n1 00:28:36.234 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.234 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.234 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.234 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.234 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.235 
12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.235 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.493 nvme0n1 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.493 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.494 12:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.494 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.751 nvme0n1 00:28:36.751 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.751 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.751 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.751 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.751 12:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.751 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.751 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.751 12:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.751 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.751 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.023 12:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.023 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.024 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.024 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.024 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.282 nvme0n1 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.282 12:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.282 12:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.282 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.542 nvme0n1 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.542 12:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.542 12:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:37.800 nvme0n1 00:28:37.800 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.800 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.800 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.800 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.800 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.800 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.058 
12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.058 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.317 nvme0n1 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.317 12:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.317 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.885 nvme0n1 00:28:38.885 12:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.885 12:15:16 
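The trace above repeats one cycle per key: `nvmet_auth_set_key` followed by `connect_authenticate` for every (dhgroup, keyid) pair under the current digest. A minimal stand-alone sketch of that loop structure (the `auth_matrix` wrapper and the `echo` bodies are illustrative stand-ins; the real host/auth.sh helpers configure the kernel nvmet target and drive rpc.py):

```shell
# Sketch of the nested loop visible at host/auth.sh@101-103 in the trace.
# Assumption: only the sha512 pass and two dhgroups are shown here; the
# real script iterates more digests. auth_matrix is a hypothetical name.
digest=sha512
dhgroups=(ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)

auth_matrix() {
    local dhgroup keyid
    for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
        for keyid in "${!keys[@]}"; do         # host/auth.sh@102
            # Real script: program the target key, then attach/verify/detach
            echo "nvmet_auth_set_key $digest $dhgroup $keyid"
            echo "connect_authenticate $digest $dhgroup $keyid"
        done
    done
}
auth_matrix
```

Each `connect_authenticate` in the log corresponds to the `bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...` plus `bdev_nvme_attach_controller ... --dhchap-key keyN` sequence seen above.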
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.885 12:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.885 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.452 nvme0n1 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.452 12:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.452 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.453 12:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
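The `get_main_ns_ip` steps traced at nvmf/common.sh@741-755 pick which address the initiator dials: an associative array maps the transport name to the environment variable holding the IP (`NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp), then the variable is dereferenced. A simplified reconstruction under that assumption (the function body here is a sketch, not the exact common.sh source):

```shell
# Hedged reconstruction of get_main_ns_ip (nvmf/common.sh@741-755).
# Variable names mirror the trace; error handling is simplified.
get_main_ns_ip() {
    local transport=$1 ip var
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@744
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@745
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    var=${ip_candidates[$transport]}
    ip=${!var}            # indirect expansion: read the named env var
    [[ -z $ip ]] && return 1                      # common.sh@750
    echo "$ip"                                    # common.sh@755
}

NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip tcp
```

This explains why every `bdev_nvme_attach_controller` in the tcp trace targets `-a 10.0.0.1`: that is the resolved `NVMF_INITIATOR_IP`.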
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.453 12:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.021 nvme0n1 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.021 12:15:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.021 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:40.589 nvme0n1 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.589 
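The `DHHC-1:xx:base64:` strings passed around above follow the NVMe in-band authentication secret representation used by nvme-cli and SPDK: a `DHHC-1` prefix, a two-digit hash identifier (`00` meaning the secret is used untransformed), and a base64 field encoding the secret bytes plus a 4-byte CRC32 trailer. A small parser sketch for inspection only (`dhchap_key_info` is a hypothetical helper, not part of auth.sh; the CRC trailer size is an assumption based on that format):

```shell
# Illustrative parser for the DHHC-1 key strings seen in this trace.
# Assumes GNU base64 and the secret||crc32 payload layout.
dhchap_key_info() {
    local key=$1 prefix hmac b64 bytes
    IFS=: read -r prefix hmac b64 _ <<<"$key"
    [ "$prefix" = DHHC-1 ] || { echo invalid; return 1; }
    # decoded payload = secret bytes followed by a 4-byte CRC32 trailer
    bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "hmac=$hmac secret_len=$((bytes - 4))"
}

# Example with keyid 0 from the trace: a 32-byte untransformed secret
dhchap_key_info 'DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX:'
```

The `:03:` key used for keyid 4 decodes to a longer (64-byte) secret, which matches the larger hash identifier.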
12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.589 12:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.157 nvme0n1 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzZmNDAyNjI0YTM0OTQ3OTRjODVlNTMzMDAyZDQ2ODOFr9KX: 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGNmM2YyNTQ4ZmNkZjlhNjI4YjNiOTU3MjFkYmY1NTQyNTBkNTMzNDNiMTliZTcwY2FiMGRkNGNjOTYzMGVkNlDqZ1k=: 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.157 12:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.157 12:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.094 nvme0n1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.094 12:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.094 12:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.094 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.661 nvme0n1 00:28:42.661 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.919 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.919 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.919 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.920 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.920 12:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.920 12:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg2YzBmYWY1OGE0NmQyZDQ5NDEyMTY1NWJkOGM4NjU+68H0: 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIyMDVkNGMwYTU0ZGEwYTczYzgzZWY3OWNmMDhjYTj4baJ4: 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.920 12:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.920 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.854 nvme0n1 00:28:43.854 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.854 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.854 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.855 12:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmMyMTdmZjkyNDU0ZDY3Mjc3NzA4MjM4YmQxNDliYTE2NTQ4YmExYjUxNDNiODkzHF/KLQ==: 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDYyMjM1Mjc2NTkwZTVkZTRkYTBjNmI5NzRkM2Q3ZDeOc/MM: 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.855 12:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
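The `DHHC-1:<nn>:<base64>:` strings echoed throughout the log are NVMe DH-HMAC-CHAP secret representations: the two-digit field selects the optional hash transformation applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the raw key bytes followed by a little-endian CRC-32 integrity check. A minimal sketch of generating and parsing such a secret — this is illustrative code with hypothetical helper names, not SPDK's implementation:

```python
import base64
import os
import struct
import zlib

# Hash-transformation identifiers used in the DHHC-1 secret representation.
HASH_IDS = {"00": None, "01": "sha256", "02": "sha384", "03": "sha512"}

def gen_dhchap_secret(key_len=32, hash_id="00"):
    """Build a DHHC-1 secret string: base64(key || crc32(key) little-endian)."""
    key = os.urandom(key_len)
    blob = key + struct.pack("<I", zlib.crc32(key))
    return "DHHC-1:%s:%s:" % (hash_id, base64.b64encode(blob).decode())

def parse_dhchap_secret(secret):
    """Split a DHHC-1 secret, verify the trailing CRC-32, return (hash_id, key)."""
    prefix, hash_id, b64, _trailer = secret.split(":")
    assert prefix == "DHHC-1" and hash_id in HASH_IDS
    blob = base64.b64decode(b64)
    key, (crc,) = blob[:-4], struct.unpack("<I", blob[-4:])
    assert zlib.crc32(key) == crc, "corrupt secret (CRC mismatch)"
    return hash_id, key

# Round-trip: a 48-byte key with the SHA-512 transformation id,
# matching the key3/ckey0 style secrets seen in the log.
secret = gen_dhchap_secret(48, "03")
hid, key = parse_dhchap_secret(secret)
print(hid, len(key))  # → 03 48
```

The sketch accepts any key length; the spec additionally constrains keys to 32, 48, or 64 bytes, matching the digest length when a hash transformation is selected.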
00:28:44.453 nvme0n1 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmIzMzE5NTZjMWU1NTgxMmIyZWQ1YjAyN2ZkZjY4M2ZhYjU3ZmMzYjBlMjQ4MTdhMGEyN2JjY2JiYWQzMWQzOE9L+rY=: 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.453 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.711 
12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.711 12:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.279 nvme0n1 00:28:45.279 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.279 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.279 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.279 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.279 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.279 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjMwMmU1YjNlOWVlY2IxZTcyNTllODZjYjE0OTNiZmViOTA5Y2MwMjJmYmVmYWIzNOxOWQ==: 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: ]] 00:28:45.538 
12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBhOTFhMWI3ODdhZTZmYWM5OGI1YmU0MGNmNjk2OWQzOTEwZGYzOTMzOTc0ZDk3lUqlFw==: 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.538 request: 00:28:45.538 { 00:28:45.538 "name": "nvme0", 00:28:45.538 "trtype": "tcp", 00:28:45.538 "traddr": "10.0.0.1", 00:28:45.538 "adrfam": "ipv4", 00:28:45.538 "trsvcid": "4420", 00:28:45.538 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.538 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.538 "prchk_reftag": false, 00:28:45.538 "prchk_guard": false, 00:28:45.538 "hdgst": false, 00:28:45.538 "ddgst": false, 00:28:45.538 "method": "bdev_nvme_attach_controller", 00:28:45.538 "req_id": 1 00:28:45.538 } 00:28:45.538 Got JSON-RPC error response 00:28:45.538 response: 00:28:45.538 { 00:28:45.538 "code": -5, 00:28:45.538 "message": "Input/output error" 00:28:45.538 } 00:28:45.538 12:15:22 
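The `NOT rpc_cmd bdev_nvme_attach_controller ...` wrapper above is the harness's negative test: connecting without the DH-HMAC-CHAP key the target expects must fail, and SPDK surfaces that as a JSON-RPC error with code -5 (errno 5, EIO, "Input/output error"), as captured in the request/response pair in the log. A small sketch of checking such an error object — the `auth_failed` helper is hypothetical, not part of SPDK:

```python
import json

# Error object as captured in the log above (abridged to the error fields).
response = json.loads('{"code": -5, "message": "Input/output error"}')

def auth_failed(err):
    """True when a bdev_nvme_attach_controller error matches the expected
    authentication failure: code -5 is -EIO, reported as 'Input/output error'."""
    return err["code"] == -5 and "Input/output error" in err["message"]

print(auth_failed(response))                         # → True
print(auth_failed({"code": 0, "message": "ok"}))     # → False
```

In the log itself the same check appears as `[[ 1 == 0 ]]` on the RPC's exit status followed by `es=1`, i.e. the harness records that the command failed as intended.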
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.538 12:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.538 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.539 request: 00:28:45.539 { 00:28:45.539 "name": "nvme0", 00:28:45.539 "trtype": "tcp", 00:28:45.539 "traddr": "10.0.0.1", 00:28:45.539 "adrfam": "ipv4", 00:28:45.539 
"trsvcid": "4420", 00:28:45.539 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.539 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.539 "prchk_reftag": false, 00:28:45.539 "prchk_guard": false, 00:28:45.539 "hdgst": false, 00:28:45.539 "ddgst": false, 00:28:45.539 "dhchap_key": "key2", 00:28:45.539 "method": "bdev_nvme_attach_controller", 00:28:45.539 "req_id": 1 00:28:45.539 } 00:28:45.539 Got JSON-RPC error response 00:28:45.539 response: 00:28:45.539 { 00:28:45.539 "code": -5, 00:28:45.539 "message": "Input/output error" 00:28:45.539 } 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.539 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.800 
12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.800 12:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.800 request: 00:28:45.800 { 00:28:45.800 "name": "nvme0", 00:28:45.800 "trtype": "tcp", 00:28:45.800 "traddr": "10.0.0.1", 00:28:45.800 "adrfam": "ipv4", 00:28:45.800 "trsvcid": "4420", 00:28:45.800 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.800 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.800 "prchk_reftag": false, 00:28:45.800 "prchk_guard": false, 00:28:45.800 "hdgst": false, 00:28:45.800 "ddgst": false, 00:28:45.800 "dhchap_key": "key1", 00:28:45.800 "dhchap_ctrlr_key": "ckey2", 00:28:45.800 "method": "bdev_nvme_attach_controller", 00:28:45.800 "req_id": 1 00:28:45.800 } 00:28:45.800 Got JSON-RPC error response 00:28:45.800 response: 00:28:45.800 { 00:28:45.800 "code": -5, 00:28:45.800 "message": "Input/output error" 00:28:45.800 } 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:45.800 12:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.800 rmmod nvme_tcp 00:28:45.800 rmmod nvme_fabrics 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 86504 ']' 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 86504 00:28:45.800 12:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 86504 ']' 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 86504 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86504 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86504' 00:28:45.801 killing process with pid 86504 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 86504 00:28:45.801 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 86504 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.060 12:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:48.594 12:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:48.594 12:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.125 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:51.125 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:51.126 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:28:52.063 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:28:52.063 12:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xy0 /tmp/spdk.key-null.eF9 /tmp/spdk.key-sha256.7K4 /tmp/spdk.key-sha384.IwF /tmp/spdk.key-sha512.xfr /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:52.063 12:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:55.351 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:55.351 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:55.351 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:55.351 00:28:55.351 real 0m56.495s 00:28:55.351 user 0m51.499s 00:28:55.351 sys 0m12.650s 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:55.351 ************************************ 00:28:55.351 END TEST nvmf_auth_host 00:28:55.351 ************************************ 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.351 ************************************ 00:28:55.351 START TEST nvmf_digest 00:28:55.351 ************************************ 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:55.351 * Looking for test storage... 
00:28:55.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.351 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.352 12:15:32 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:28:55.352 12:15:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:00.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:00.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.623 12:15:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:00.623 Found net devices under 0000:af:00.0: cvl_0_0 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:00.623 Found net devices under 0000:af:00.1: cvl_0_1 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:00.623 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.624 12:15:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:00.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:29:00.882 00:29:00.882 --- 10.0.0.2 ping statistics --- 00:29:00.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.882 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:29:00.882 00:29:00.882 --- 10.0.0.1 ping statistics --- 00:29:00.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.882 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.882 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.141 ************************************ 00:29:01.141 START TEST nvmf_digest_clean 00:29:01.141 ************************************ 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=101966 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 101966 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 101966 ']' 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.141 12:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.141 [2024-07-25 12:15:38.276379] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:29:01.141 [2024-07-25 12:15:38.276433] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.141 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.141 [2024-07-25 12:15:38.363533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.400 [2024-07-25 12:15:38.458169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.400 [2024-07-25 12:15:38.458203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.400 [2024-07-25 12:15:38.458213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.400 [2024-07-25 12:15:38.458222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.401 [2024-07-25 12:15:38.458230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
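The `waitforlisten 101966` step above blocks until the target process is up and its RPC socket is accepting commands. A minimal stand-in for that pattern is sketched below; the helper name, retry policy, and the use of a plain path-existence check are illustrative assumptions, not the exact logic in `common/autotest_common.sh` (which also verifies the PID is alive).

```shell
# Poll until the target's RPC socket path appears, failing after
# max_retries * 0.1s. /var/tmp/spdk.sock is SPDK's default RPC socket;
# here we demo against a temporary path instead.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Demo: the "socket" appears shortly after we start waiting, mimicking a
# target that takes a moment to initialize.
demo_sock=$(mktemp -u)
( sleep 0.3; touch "$demo_sock" ) &
wait_for_rpc_sock "$demo_sock" && echo "target is up"   # → target is up
```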
00:29:01.401 [2024-07-25 12:15:38.458251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.967 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.967 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:01.967 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.967 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:01.967 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.967 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:02.226 null0 00:29:02.226 [2024-07-25 12:15:39.360630] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.226 [2024-07-25 12:15:39.384822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=102240 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 102240 /var/tmp/bperf.sock 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 102240 ']' 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.226 12:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:02.226 [2024-07-25 12:15:39.439544] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:02.226 [2024-07-25 12:15:39.439600] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102240 ] 00:29:02.226 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.226 [2024-07-25 12:15:39.520619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.486 [2024-07-25 12:15:39.620771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.442 12:15:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.010 nvme0n1 00:29:04.010 12:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:04.010 12:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:04.269 Running I/O for 2 seconds... 00:29:06.172 00:29:06.172 Latency(us) 00:29:06.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.172 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:06.172 nvme0n1 : 2.01 14378.46 56.17 0.00 0.00 8890.66 5123.72 19660.80 00:29:06.172 =================================================================================================================== 00:29:06.172 Total : 14378.46 56.17 0.00 0.00 8890.66 5123.72 19660.80 00:29:06.172 0 00:29:06.172 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:06.172 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:06.172 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:06.172 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:06.172 | select(.opcode=="crc32c") 00:29:06.172 | "\(.module_name) \(.executed)"' 00:29:06.172 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 102240 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 102240 ']' 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 102240 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102240 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102240' 00:29:06.431 killing process with pid 102240 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 102240 00:29:06.431 Received shutdown signal, test time was about 2.000000 seconds 00:29:06.431 00:29:06.431 Latency(us) 00:29:06.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.431 =================================================================================================================== 00:29:06.431 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:29:06.431 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 102240 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=103039 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 103039 /var/tmp/bperf.sock 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 103039 ']' 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.690 12:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:06.690 [2024-07-25 12:15:43.943835] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:06.690 [2024-07-25 12:15:43.943896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103039 ] 00:29:06.690 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:06.690 Zero copy mechanism will not be used. 00:29:06.690 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.948 [2024-07-25 12:15:44.024203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.948 [2024-07-25 12:15:44.128434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.884 12:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.884 12:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:07.884 12:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:07.884 12:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:07.884 12:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:08.143 12:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # 
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.143 12:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.402 nvme0n1 00:29:08.402 12:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:08.402 12:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.402 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.402 Zero copy mechanism will not be used. 00:29:08.402 Running I/O for 2 seconds... 00:29:10.935 00:29:10.935 Latency(us) 00:29:10.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.935 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:10.935 nvme0n1 : 2.00 3484.65 435.58 0.00 0.00 4585.94 1623.51 12332.68 00:29:10.935 =================================================================================================================== 00:29:10.935 Total : 3484.65 435.58 0.00 0.00 4585.94 1623.51 12332.68 00:29:10.935 0 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:10.935 | select(.opcode=="crc32c") 00:29:10.935 | 
"\(.module_name) \(.executed)"' 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 103039 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 103039 ']' 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 103039 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103039 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103039' 00:29:10.935 killing process with pid 103039 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@969 -- # kill 103039 00:29:10.935 Received shutdown signal, test time was about 2.000000 seconds 00:29:10.935 00:29:10.935 Latency(us) 00:29:10.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.935 =================================================================================================================== 00:29:10.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.935 12:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 103039 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:10.935 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=103834 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 103834 /var/tmp/bperf.sock 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 103834 
']' 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.936 12:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.194 [2024-07-25 12:15:48.244218] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:11.194 [2024-07-25 12:15:48.244279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103834 ] 00:29:11.194 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.194 [2024-07-25 12:15:48.325007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.194 [2024-07-25 12:15:48.429243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.129 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.129 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:12.129 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:12.129 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:12.129 
12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:12.387 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.387 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.955 nvme0n1 00:29:12.955 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:12.955 12:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.955 Running I/O for 2 seconds... 
00:29:14.868 00:29:14.868 Latency(us) 00:29:14.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.868 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.868 nvme0n1 : 2.01 18463.71 72.12 0.00 0.00 6918.19 3604.48 19303.33 00:29:14.868 =================================================================================================================== 00:29:14.868 Total : 18463.71 72.12 0.00 0.00 6918.19 3604.48 19303.33 00:29:14.868 0 00:29:14.868 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:14.868 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:14.868 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.868 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.868 | select(.opcode=="crc32c") 00:29:14.868 | "\(.module_name) \(.executed)"' 00:29:14.868 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 103834 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 103834 ']' 
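The summary tables bdevperf prints (like the randwrite run above: 18463.71 IOPS, 72.12 MiB/s) are fixed-column text, so a small awk pass over the `Total` row is enough to pull the numbers out programmatically. The sample line below is copied from this log; the column order (`IOPS MiB/s Fail/s TO/s Average min max`) is taken from the table header here and may differ in other SPDK versions.

```shell
# Extract IOPS and MiB/s from a bdevperf summary "Total" line.
# Fields: $1="Total", $2=":", $3=IOPS, $4=MiB/s, then Fail/s TO/s avg min max.
summary='Total : 18463.71 72.12 0.00 0.00 6918.19 3604.48 19303.33'
iops=$(echo "$summary" | awk '/^Total/ {print $3}')
mibs=$(echo "$summary" | awk '/^Total/ {print $4}')
echo "IOPS=$iops MiB/s=$mibs"   # → IOPS=18463.71 MiB/s=72.12
```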
00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 103834 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:15.127 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103834 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103834' 00:29:15.385 killing process with pid 103834 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 103834 00:29:15.385 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.385 00:29:15.385 Latency(us) 00:29:15.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.385 =================================================================================================================== 00:29:15.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 103834 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 
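The digest verification above works by reading `accel_get_stats` through a jq filter and then checking that the crc32c operations ran on the expected engine (`software` here, since DSA is disabled via `scan_dsa=false`). The sketch below runs the same jq expression against a hand-written stats reply; the JSON shape is inferred from the filter in the log and the counts are made up, so treat it as an illustration of the filter, not of real accel output.

```shell
# Mocked accel_get_stats reply; in the test this comes from
# rpc.py -s /var/tmp/bperf.sock accel_get_stats.
stats='{"operations":[
  {"opcode":"crc32c","module_name":"software","executed":14378},
  {"opcode":"copy","module_name":"software","executed":3}]}'

# Same filter as host/digest.sh: keep only crc32c, emit "module executed".
read -r acc_module acc_executed < <(
    echo "$stats" | jq -r '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"'
)
echo "module=$acc_module executed=$acc_executed"
# The test then asserts acc_executed > 0 and acc_module matches the
# expected engine (software when DSA is off).
```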
00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:15.385 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104623 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104623 /var/tmp/bperf.sock 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 104623 ']' 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:15.386 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:15.644 [2024-07-25 12:15:52.703916] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:29:15.644 [2024-07-25 12:15:52.703977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104623 ] 00:29:15.644 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.644 Zero copy mechanism will not be used. 00:29:15.644 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.644 [2024-07-25 12:15:52.784969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.644 [2024-07-25 12:15:52.884238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.644 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.644 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:15.644 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:15.644 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:15.644 12:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:16.212 12:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.212 12:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.470 nvme0n1 00:29:16.470 12:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 
-- # bperf_py perform_tests 00:29:16.470 12:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.470 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.470 Zero copy mechanism will not be used. 00:29:16.470 Running I/O for 2 seconds... 00:29:19.034 00:29:19.034 Latency(us) 00:29:19.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.034 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:19.034 nvme0n1 : 2.00 5011.58 626.45 0.00 0.00 3185.19 2144.81 14596.65 00:29:19.034 =================================================================================================================== 00:29:19.034 Total : 5011.58 626.45 0.00 0.00 3185.19 2144.81 14596.65 00:29:19.034 0 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:19.034 | select(.opcode=="crc32c") 00:29:19.034 | "\(.module_name) \(.executed)"' 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104623 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 104623 ']' 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 104623 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.034 12:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104623 00:29:19.034 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:19.034 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:19.034 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104623' 00:29:19.034 killing process with pid 104623 00:29:19.034 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 104623 00:29:19.034 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.034 00:29:19.034 Latency(us) 00:29:19.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.035 =================================================================================================================== 00:29:19.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 
-- # wait 104623 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 101966 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 101966 ']' 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 101966 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101966 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101966' 00:29:19.035 killing process with pid 101966 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 101966 00:29:19.035 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 101966 00:29:19.293 00:29:19.293 real 0m18.265s 00:29:19.293 user 0m36.623s 00:29:19.293 sys 0m4.260s 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.293 ************************************ 00:29:19.293 END TEST nvmf_digest_clean 00:29:19.293 ************************************ 00:29:19.293 12:15:56 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.293 ************************************ 00:29:19.293 START TEST nvmf_digest_error 00:29:19.293 ************************************ 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=105207 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 105207 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 105207 ']' 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:29:19.293 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.294 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:19.294 12:15:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.552 [2024-07-25 12:15:56.619145] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:19.552 [2024-07-25 12:15:56.619200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.552 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.552 [2024-07-25 12:15:56.706300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.552 [2024-07-25 12:15:56.795489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.552 [2024-07-25 12:15:56.795529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.552 [2024-07-25 12:15:56.795539] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.552 [2024-07-25 12:15:56.795549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.552 [2024-07-25 12:15:56.795556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:19.552 [2024-07-25 12:15:56.795583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.929 [2024-07-25 12:15:57.850637] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:29:20.929 null0 00:29:20.929 [2024-07-25 12:15:57.950675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.929 [2024-07-25 12:15:57.974869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105472 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 105472 /var/tmp/bperf.sock 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 105472 ']' 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:29:20.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.929 12:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.929 [2024-07-25 12:15:58.059801] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:20.929 [2024-07-25 12:15:58.059912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105472 ] 00:29:20.929 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.929 [2024-07-25 12:15:58.177649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.188 [2024-07-25 12:15:58.279573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.123 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:22.123 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:22.123 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:22.123 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:22.687 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:22.687 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:22.687 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.687 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.688 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.688 12:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.946 nvme0n1 00:29:22.946 12:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:22.946 12:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.946 12:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.946 12:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.946 12:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.946 12:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.946 Running I/O for 2 seconds... 
00:29:22.946 [2024-07-25 12:16:00.219627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:22.946 [2024-07-25 12:16:00.219678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.946 [2024-07-25 12:16:00.219702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.946 [2024-07-25 12:16:00.241360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:22.946 [2024-07-25 12:16:00.241398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.946 [2024-07-25 12:16:00.241416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.263273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.263309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.263325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.276153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.276187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.276203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.293899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.293932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.293947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.307968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.308001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.308017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.328365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.328400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.328415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.350489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.350530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.350547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.370215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.370248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.370263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.391918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.391958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.391973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.407926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.407959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.407974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.427270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.427303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.205 [2024-07-25 12:16:00.427318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.442139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.442173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.442188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.464009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.464043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.464058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.486266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.486301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.486316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.205 [2024-07-25 12:16:00.504291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.205 [2024-07-25 12:16:00.504324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:15015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.205 [2024-07-25 12:16:00.504339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.465 [2024-07-25 12:16:00.519188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.465 [2024-07-25 12:16:00.519221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.465 [2024-07-25 12:16:00.519236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.465 [2024-07-25 12:16:00.541065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.465 [2024-07-25 12:16:00.541098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.465 [2024-07-25 12:16:00.541114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.465 [2024-07-25 12:16:00.555446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.465 [2024-07-25 12:16:00.555477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.465 [2024-07-25 12:16:00.555492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.465 [2024-07-25 12:16:00.576694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:23.465 [2024-07-25 12:16:00.576728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.576744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.592408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.592441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.592456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.606658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.606691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.606707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.620514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.620547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.620562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.635222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.635255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.635270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.649742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.649775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.649790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.665328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.665360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.665375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.686008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.686047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.686063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.706648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.706681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.706696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.720686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.720720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.720734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.744057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.744095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.744111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.465 [2024-07-25 12:16:00.763335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.465 [2024-07-25 12:16:00.763370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.465 [2024-07-25 12:16:00.763386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.779027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.779061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.779076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.798865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.798897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.798912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.818986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.819020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.819036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.834593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.834632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.834647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.853690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.853722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.853737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.867856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.867888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.867904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.888047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.888081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.888097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.908137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.908171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.908186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.923584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.923623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.923638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.945105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.945140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.945156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.965522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.965555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.965570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:00.979977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:00.980010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:00.980025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.724 [2024-07-25 12:16:01.000027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.724 [2024-07-25 12:16:01.000061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.724 [2024-07-25 12:16:01.000082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.725 [2024-07-25 12:16:01.020790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.725 [2024-07-25 12:16:01.020824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.725 [2024-07-25 12:16:01.020840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.042549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.042583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.042598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.064491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.064524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.064540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.086717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.086752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.086767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.105250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.105283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.105298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.119843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.119876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.119890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.141545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.141581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.141596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.162374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.162408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.162423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.178472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.178509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.178523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.193856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.193889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.193904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.212746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.212778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.212793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.228211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.228244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.228260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.249967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.250001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.250016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.984 [2024-07-25 12:16:01.271158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:23.984 [2024-07-25 12:16:01.271190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.984 [2024-07-25 12:16:01.271205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.242 [2024-07-25 12:16:01.289881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.289915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.289930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.309264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.309297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.309313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.324201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.324234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.324249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.344985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.345019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.345034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.360775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.360808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.360823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.381362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.381396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.381411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.402370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.402405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.402420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.419118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.419150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.419166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.432993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.433026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.433041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.446870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.446903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.446919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.461103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.461136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.461152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.478649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.478681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.478703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.493137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.493170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.493185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.510371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.510405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.510421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.243 [2024-07-25 12:16:01.525000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.243 [2024-07-25 12:16:01.525033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.243 [2024-07-25 12:16:01.525049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.543851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.543885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.543900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.563869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.563903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.563918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.578757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.578791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.578806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.594496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.594529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.594544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.609652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.609684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.609699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.630476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.630509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.630524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.650152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.650185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.650200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.670670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.670702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.670717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.689643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.689676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.689691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.704544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.704576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.704591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.724453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.724486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.724501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.745406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.745439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.745454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.760192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.760225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.760241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.781561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.781594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.781622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.502 [2024-07-25 12:16:01.796987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.502 [2024-07-25 12:16:01.797019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.502 [2024-07-25 12:16:01.797034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.818038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.818070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.818085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.835712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.835744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.835759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.849952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.849985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.850000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.864730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.864762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.864777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.884964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.884996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.885011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.900251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.900285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.900300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.921549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.921580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.921595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.937006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.937044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.937059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.957950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.957982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.957997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.979318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.979351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.979366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:01.999118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:01.999150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:01.999166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:02.014790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:02.014823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25 12:16:02.014838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.762 [2024-07-25 12:16:02.036256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0)
00:29:24.762 [2024-07-25 12:16:02.036289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.762 [2024-07-25
12:16:02.036305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.762 [2024-07-25 12:16:02.053217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:24.762 [2024-07-25 12:16:02.053250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.762 [2024-07-25 12:16:02.053265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.066324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.066357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.066372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.081118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.081151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.081166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.098528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.098561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25528 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.098577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.112453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.112485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.112499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.134136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.134169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.134184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.156496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.156529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.156544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.171050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.171081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.171096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 [2024-07-25 12:16:02.192019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162c9f0) 00:29:25.021 [2024-07-25 12:16:02.192051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.021 [2024-07-25 12:16:02.192066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.021 00:29:25.021 Latency(us) 00:29:25.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.021 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:25.021 nvme0n1 : 2.01 14069.75 54.96 0.00 0.00 9084.11 5004.57 31933.91 00:29:25.021 =================================================================================================================== 00:29:25.021 Total : 14069.75 54.96 0.00 0.00 9084.11 5004.57 31933.91 00:29:25.021 0 00:29:25.021 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:25.021 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:25.021 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:25.021 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:25.021 | .driver_specific 00:29:25.021 | .nvme_error 00:29:25.021 | .status_code 
00:29:25.021 | .command_transient_transport_error'
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 ))
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105472
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 105472 ']'
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 105472
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105472
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105472'
killing process with pid 105472
00:29:25.279 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 105472
Received shutdown signal, test time was about 2.000000 seconds
00:29:25.280
00:29:25.280 Latency(us)
00:29:25.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.280 ===================================================================================================================
00:29:25.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:25.280 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 105472
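The `get_transient_errcount` step above pipes `bdev_get_iostat` output through jq to pull out the transient transport error counter, and the test then requires it to be non-zero (`(( 110 > 0 ))`). A minimal sketch of the same extraction, using a hand-written, hypothetical JSON document in place of real bperf RPC output (field names mirror the jq path in the log; real `bdev_get_iostat` output carries many more fields per bdev):

```python
import json

# Hypothetical bdev_get_iostat-shaped document; only the fields the jq
# filter touches are included here.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 110
          }
        }
      }
    }
  ]
}
""")

# Walk the same path as:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
#          | .command_transient_transport_error'
count = iostat["bdevs"][0]["driver_specific"]["nvme_error"] \
              ["status_code"]["command_transient_transport_error"]
print(count)      # 110

# Mirrors the pass condition in the log: injected digest errors must
# surface as a non-zero transient transport error count.
assert count > 0
```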
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=106330
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 106330 /var/tmp/bperf.sock
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 106330 ']'
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:25.538 12:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:25.538 [2024-07-25 12:16:02.826245] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:29:25.538 [2024-07-25 12:16:02.826357] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106330 ]
00:29:25.538 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:25.538 Zero copy mechanism will not be used.
00:29:25.796 EAL: No free 2048 kB hugepages reported on node 1
00:29:25.796 [2024-07-25 12:16:02.943033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.796 [2024-07-25 12:16:03.046088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:26.883 12:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:26.883 12:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:26.883 12:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:26.883 12:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:27.141 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:27.141 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.141 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.141 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.141 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:27.141 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:27.709 nvme0n1
00:29:27.709 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:27.709 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.709 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.709 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.709 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:27.709 12:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:27.709 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:27.709 Zero copy mechanism will not be used.
00:29:27.709 Running I/O for 2 seconds...
00:29:27.709 [2024-07-25 12:16:04.969452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810)
00:29:27.709 [2024-07-25 12:16:04.969502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.709 [2024-07-25 12:16:04.969520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... 37 further near-identical "data digest error on tqpair=(0xbe7810)" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entry triplets elided (READ, len:32; cids 0, 1, 2, and 15; timestamps 12:16:04.982 through 12:16:05.305) ...]
00:29:28.230 [2024-07-25 12:16:05.315468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810)
00:29:28.230 [2024-07-25 12:16:05.315504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.230 [2024-07-25 12:16:05.315520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.230 [2024-07-25 12:16:05.326212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.230 [2024-07-25 12:16:05.326249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-25 12:16:05.326265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.230 [2024-07-25 12:16:05.337936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.337972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.337988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.351148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.351184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.351200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.364947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.364983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.231 [2024-07-25 12:16:05.364999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.377141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.377177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.377193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.389683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.389720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.389736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.402799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.402836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.402857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.415816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.415851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.415866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.427424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.427460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.427475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.440737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.440772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.440787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.453909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.453946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.453961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.466491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.466527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.466544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.478595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.478638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.478654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.491356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.491392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.491408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.504333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.504370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.504385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.516709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 
00:29:28.231 [2024-07-25 12:16:05.516751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.516767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.231 [2024-07-25 12:16:05.529395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.231 [2024-07-25 12:16:05.529433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-25 12:16:05.529449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.542253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.542290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.490 [2024-07-25 12:16:05.542306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.555649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.555685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.490 [2024-07-25 12:16:05.555700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.568319] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.568354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.490 [2024-07-25 12:16:05.568369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.581191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.581226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.490 [2024-07-25 12:16:05.581242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.593413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.593450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.490 [2024-07-25 12:16:05.593466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.604273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.604308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.490 [2024-07-25 12:16:05.604323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:28.490 [2024-07-25 12:16:05.615399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.490 [2024-07-25 12:16:05.615434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.615449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.626337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.626372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.626387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.637937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.637973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.637988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.648401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.648435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.648451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.658628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.658663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.658678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.668410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.668445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.668460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.678309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.678345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.678359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.688330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.688367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.688383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.698643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.698678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.698693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.708410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.708445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.708470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.717958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.717992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.718007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.727715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.727761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.491 [2024-07-25 12:16:05.727777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.738156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.738192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.738208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.749362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.749397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.749413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.760230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.760266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.760281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.771057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.771108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.491 [2024-07-25 12:16:05.781594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.491 [2024-07-25 12:16:05.781639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.491 [2024-07-25 12:16:05.781654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.750 [2024-07-25 12:16:05.791703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.750 [2024-07-25 12:16:05.791739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.750 [2024-07-25 12:16:05.791754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.750 [2024-07-25 12:16:05.801628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.801664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.801679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.811250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.811286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.811301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.820949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.820985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.821001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.830142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.830177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.839182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.839216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.839232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.848951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 
00:29:28.751 [2024-07-25 12:16:05.848986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.849000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.859215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.859250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.859265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.869025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.869059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.869075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.878024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:28.751 [2024-07-25 12:16:05.878059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.751 [2024-07-25 12:16:05.878080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.751 [2024-07-25 12:16:05.886711] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810)
00:29:28.751 [2024-07-25 12:16:05.886745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.751 [2024-07-25 12:16:05.886759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern -- "data digest error on tqpair=(0xbe7810)" from nvme_tcp.c:1459, the READ command print from nvme_qpair.c:243, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 -- repeats continuously for every in-flight READ on qid:1 (cids 0-8 and 15, varying LBAs, len:32) from 12:16:05.894 through 12:16:06.666; the final entry is cut off mid-record ...]
lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.666690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.677566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.677600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.677627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.687855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.687889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.687905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.697691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.697725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.697740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.706745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.706778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.706793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.717101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.717141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.717156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.727227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.727262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.727277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.737083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.737118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.737133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.747620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 
00:29:29.533 [2024-07-25 12:16:06.747658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.747673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.757213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.757249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.757264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.765983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.766017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.766032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.775059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.775094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.775109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.784032] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.784065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.784080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.792710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.792744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.792759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.801558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.801594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.801618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.810783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.810818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.810833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.820197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.820233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.820248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.533 [2024-07-25 12:16:06.829255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.533 [2024-07-25 12:16:06.829289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.533 [2024-07-25 12:16:06.829304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.838367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.838403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.838419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.847700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.847734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.847750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.856389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.856423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.856438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.865190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.865224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.865239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.873515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.873549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.873570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.881904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.881938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.881953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.890248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.890282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.890296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.898445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.898479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.898493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.906684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.906718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.906732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.915109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.915143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:29.793 [2024-07-25 12:16:06.915158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.923347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.923381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.923396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.931988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.932022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.932036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.940412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.940446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.940461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.948893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.948937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.948951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.793 [2024-07-25 12:16:06.957311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe7810) 00:29:29.793 [2024-07-25 12:16:06.957346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.793 [2024-07-25 12:16:06.957361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.793 00:29:29.793 Latency(us) 00:29:29.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:29.793 nvme0n1 : 2.00 3147.21 393.40 0.00 0.00 5077.07 1377.75 14477.50 00:29:29.793 =================================================================================================================== 00:29:29.793 Total : 3147.21 393.40 0.00 0.00 5077.07 1377.75 14477.50 00:29:29.793 0 00:29:29.793 12:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:29.793 12:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:29.793 12:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:29.793 12:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:29.793 | .driver_specific 00:29:29.793 | .nvme_error 00:29:29.793 | .status_code 00:29:29.793 | .command_transient_transport_error' 00:29:30.052 
12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 )) 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 106330 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 106330 ']' 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 106330 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106330 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106330' 00:29:30.052 killing process with pid 106330 00:29:30.052 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 106330 00:29:30.052 Received shutdown signal, test time was about 2.000000 seconds 00:29:30.052 00:29:30.052 Latency(us) 00:29:30.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.053 =================================================================================================================== 00:29:30.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:30.053 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 106330 00:29:30.311 12:16:07 
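The `get_transient_errcount` step above pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and the test passes because the result (203) is greater than zero. A minimal Python sketch of the same extraction; the payload below is hypothetical apart from the field path taken from the jq filter and the count 203 seen in the log:

```python
import json

# Hypothetical bdev_get_iostat-shaped payload. The nested field path mirrors
# the jq filter in the log; all other fields and values are illustrative.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 203
          }
        }
      }
    }
  ]
}
""")

# Equivalent of: .bdevs[0] | .driver_specific | .nvme_error
#                | .status_code | .command_transient_transport_error
count = (sample["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
print(count)  # 203 -- the test's (( count > 0 )) check passes
```

Each injected digest corruption surfaces as one `COMMAND TRANSIENT TRANSPORT ERROR` completion, which is why the counter lines up with the error dump above.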
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=107238 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 107238 /var/tmp/bperf.sock 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 107238 ']' 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.311 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:30.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:30.312 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.312 12:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.312 [2024-07-25 12:16:07.581182] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:30.312 [2024-07-25 12:16:07.581295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107238 ] 00:29:30.570 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.570 [2024-07-25 12:16:07.698396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.570 [2024-07-25 12:16:07.801565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.506 12:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.506 12:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:31.506 12:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:31.506 12:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:32.074 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:32.074 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.074 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.074 12:16:09 
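The harness above blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock..." before issuing any RPCs to the freshly started bdevperf. A minimal Python sketch of that wait-for-socket idea (the real `waitforlisten` is a shell helper in SPDK's `autotest_common.sh`; the function name and paths here are illustrative):

```python
import os
import socket
import tempfile
import time

def waitforlisten(path: str, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll a UNIX domain socket until connect() succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # the server process is up and accepting RPCs
        except OSError:
            time.sleep(interval)  # not listening yet; retry
        finally:
            s.close()
    return False

# Demo: a socket we listen on ourselves is found; a missing one times out.
tmpdir = tempfile.mkdtemp()
sock_path = os.path.join(tmpdir, "bperf.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
found = waitforlisten(sock_path, timeout=2.0)
missing = waitforlisten(os.path.join(tmpdir, "absent.sock"), timeout=0.3)
server.close()
print(found, missing)  # True False
```

Polling connect() rather than checking for the socket file avoids the race where the path exists but the process is not yet accepting connections.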
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.074 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.074 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.641 nvme0n1 00:29:32.642 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:32.642 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.642 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.642 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.642 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:32.642 12:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:32.900 Running I/O for 2 seconds... 
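Every "Data digest error" in the run output below is deliberate: `accel_error_inject_error -o crc32c -t corrupt -i 256` makes the accel framework corrupt CRC-32C results, so the digest carried in each NVMe/TCP PDU (enabled by `--ddgst` on attach) no longer matches what the receiver recomputes. A minimal bitwise CRC-32C sketch showing why any corruption is caught; this is the standard Castagnoli polynomial, not SPDK's accelerated implementation:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C over the ASCII digits "123456789".
assert crc32c(b"123456789") == 0xE3069283

# A single flipped bit means the recomputed digest no longer matches the
# one carried with the PDU -- the condition the injection triggers here.
payload = bytes(range(64))
digest = crc32c(payload)
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
print(digest != crc32c(corrupted))  # True: receiver reports a data digest error
```

The driver then completes the affected command with `COMMAND TRANSIENT TRANSPORT ERROR (00/22)`, which is the status the test counts afterwards.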
00:29:32.900 [2024-07-25 12:16:10.052879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e8088
00:29:32.900 [2024-07-25 12:16:10.054261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.054306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.066001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f8a50
00:29:32.900 [2024-07-25 12:16:10.067343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.067377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.081702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f46d0
00:29:32.900 [2024-07-25 12:16:10.083612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.083645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.098392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e49b0
00:29:32.900 [2024-07-25 12:16:10.099964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.100000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.112406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f8618
00:29:32.900 [2024-07-25 12:16:10.114353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.114386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.129992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e7c50
00:29:32.900 [2024-07-25 12:16:10.132219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.132250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.140295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e5a90
00:29:32.900 [2024-07-25 12:16:10.141218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.141249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.153512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ea248
00:29:32.900 [2024-07-25 12:16:10.154433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.154463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.169415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:32.900 [2024-07-25 12:16:10.170508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.170539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.183293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:32.900 [2024-07-25 12:16:10.184397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.184428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:32.900 [2024-07-25 12:16:10.197130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:32.900 [2024-07-25 12:16:10.198231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.900 [2024-07-25 12:16:10.198263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.158 [2024-07-25 12:16:10.211047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.212149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.212180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.224900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.226010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.226039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.238764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.239859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.239889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.252662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.253758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.253788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.266483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.267578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.267616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.280343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.281453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.281485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.294236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.295329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.295360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.308063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.309230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.309261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.321944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.323041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.323071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.335806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.336905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.336935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.349628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.350726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.350756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.363514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.364614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.364644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.377370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.378470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.378499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.391213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.392309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.405100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.406196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.406226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.418919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.420024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.420054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.432814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.433911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.433940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.159 [2024-07-25 12:16:10.446683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.159 [2024-07-25 12:16:10.447776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.159 [2024-07-25 12:16:10.447805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.460526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.461627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.461657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.474391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.475493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.488222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.489318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.489347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.502060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.503158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.503187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.515992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.517097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.517126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.529832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.530926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.530956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.543703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.544801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.544831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.557571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.558667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.558697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.571384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.572479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.572508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.585261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.586350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.586379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.599124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.600219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.600249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.612960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.614058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.614088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.626864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.627968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.627998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.640706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.641802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.641831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.654549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.655646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.655675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.668658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.669756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.669785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.682464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.683583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.696319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.697422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.697452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.418 [2024-07-25 12:16:10.710184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.418 [2024-07-25 12:16:10.711280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.418 [2024-07-25 12:16:10.711310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.723981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.725073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.725102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.737878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.738969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.738998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.751708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.752809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.752843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.765519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.766619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.766648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.779378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.780471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.780500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.793189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.794280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.794309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.807018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.808111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.808141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.820876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10
00:29:33.678 [2024-07-25 12:16:10.822294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.822324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.835444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ec408
00:29:33.678 [2024-07-25 12:16:10.837165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.678 [2024-07-25 12:16:10.837195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:33.678 [2024-07-25 12:16:10.851269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ea248
00:29:33.678 [2024-07-25 12:16:10.853096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.853125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.863892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee190
00:29:33.679 [2024-07-25 12:16:10.865188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.865218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.879913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fd640
00:29:33.679 [2024-07-25 12:16:10.881952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.881981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.891476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ea248
00:29:33.679 [2024-07-25 12:16:10.892744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.892773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.907375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e4de8
00:29:33.679 [2024-07-25 12:16:10.908774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.908805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.922166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e7818
00:29:33.679 [2024-07-25 12:16:10.923744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.923775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.935177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f9f68
00:29:33.679 [2024-07-25 12:16:10.936795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.936825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.948680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ec840
00:29:33.679 [2024-07-25 12:16:10.949667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.949696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.962859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f1ca0
00:29:33.679 [2024-07-25 12:16:10.964464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.964494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:33.679 [2024-07-25 12:16:10.977402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ec840
00:29:33.679 [2024-07-25 12:16:10.978645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.679 [2024-07-25 12:16:10.978673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:33.939 [2024-07-25 12:16:10.991172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190eb328
00:29:33.939 [2024-07-25 12:16:10.992746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.939 [2024-07-25 12:16:10.992776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:33.939 [2024-07-25 12:16:11.005671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fa3a0
00:29:33.939 [2024-07-25 12:16:11.006831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.939 [2024-07-25 12:16:11.006862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:33.939 [2024-07-25 12:16:11.019879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f6458
00:29:33.939 [2024-07-25 12:16:11.021640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.939 [2024-07-25 12:16:11.021670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:33.939 [2024-07-25 12:16:11.035981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f1430
00:29:33.940 [2024-07-25 12:16:11.038056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.038085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.049289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f6458
00:29:33.940 [2024-07-25 12:16:11.050799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.050829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.061732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ddc00
00:29:33.940 [2024-07-25 12:16:11.063744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.063773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.076264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e9168
00:29:33.940 [2024-07-25 12:16:11.077779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.077808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.090158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e12d8
00:29:33.940 [2024-07-25 12:16:11.092155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.092184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.107779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ef270
00:29:33.940 [2024-07-25 12:16:11.110035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.110065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.117987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e2c28
00:29:33.940 [2024-07-25 12:16:11.118988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.119023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.133552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f46d0
00:29:33.940 [2024-07-25 12:16:11.135711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.135755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.151157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fc998
00:29:33.940 [2024-07-25 12:16:11.153571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.153601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.161386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f8e88
00:29:33.940 [2024-07-25 12:16:11.162425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.162455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.175454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f8e88
00:29:33.940 [2024-07-25 12:16:11.176559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.176589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:33.940 [2024-07-25 12:16:11.189275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f8e88
00:29:33.940 [2024-07-25 12:16:11.190420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.940 [2024-07-25 12:16:11.190449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:33.940 [2024-07-25 12:16:11.203764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e95a0 00:29:33.940 [2024-07-25 12:16:11.205345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.940 [2024-07-25 12:16:11.205376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:33.940 [2024-07-25 12:16:11.218088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:33.940 [2024-07-25 12:16:11.219366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.940 [2024-07-25 12:16:11.219396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:33.940 [2024-07-25 12:16:11.231949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:33.940 [2024-07-25 12:16:11.233256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.940 [2024-07-25 12:16:11.233286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.199 [2024-07-25 12:16:11.245797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.199 [2024-07-25 12:16:11.247120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.199 [2024-07-25 12:16:11.247150] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.199 [2024-07-25 12:16:11.259594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.199 [2024-07-25 12:16:11.260903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.199 [2024-07-25 12:16:11.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.199 [2024-07-25 12:16:11.273470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.199 [2024-07-25 12:16:11.274772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.199 [2024-07-25 12:16:11.274801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.199 [2024-07-25 12:16:11.287309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.199 [2024-07-25 12:16:11.288586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.199 [2024-07-25 12:16:11.288620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.199 [2024-07-25 12:16:11.301126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.199 [2024-07-25 12:16:11.302454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.199 [2024-07-25 12:16:11.302485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.199 [2024-07-25 12:16:11.315007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.199 [2024-07-25 12:16:11.316345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.316375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.328842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.200 [2024-07-25 12:16:11.330114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.330143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.342667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ebfd0 00:29:34.200 [2024-07-25 12:16:11.343976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.344005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.357106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fda78 00:29:34.200 [2024-07-25 12:16:11.358872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11294 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:34.200 [2024-07-25 12:16:11.358902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.371395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e95a0 00:29:34.200 [2024-07-25 12:16:11.372861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.372891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.385239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e95a0 00:29:34.200 [2024-07-25 12:16:11.386722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.386751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.399076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e95a0 00:29:34.200 [2024-07-25 12:16:11.400537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.400567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.412873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e95a0 00:29:34.200 [2024-07-25 12:16:11.414352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.414382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.426759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e95a0 00:29:34.200 [2024-07-25 12:16:11.428211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.428240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.439643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fb048 00:29:34.200 [2024-07-25 12:16:11.441106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.441135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.455325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 00:29:34.200 [2024-07-25 12:16:11.456960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.456991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.469403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 00:29:34.200 [2024-07-25 12:16:11.471061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.471090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.483212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 00:29:34.200 [2024-07-25 12:16:11.484873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.484906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.200 [2024-07-25 12:16:11.497075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 00:29:34.200 [2024-07-25 12:16:11.498738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.200 [2024-07-25 12:16:11.498768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.510929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 00:29:34.459 [2024-07-25 12:16:11.512575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.512610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.524739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 
00:29:34.459 [2024-07-25 12:16:11.526310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.526339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.539455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e88f8 00:29:34.459 [2024-07-25 12:16:11.541293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.541324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.550835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e27f0 00:29:34.459 [2024-07-25 12:16:11.551907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.551936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.563719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fdeb0 00:29:34.459 [2024-07-25 12:16:11.564777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.564805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.579722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x24ed5c0) with pdu=0x2000190e8088 00:29:34.459 [2024-07-25 12:16:11.580886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.580915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.595630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e27f0 00:29:34.459 [2024-07-25 12:16:11.597601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.597634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.609117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f6458 00:29:34.459 [2024-07-25 12:16:11.610440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.610470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.622866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f5378 00:29:34.459 [2024-07-25 12:16:11.624291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.624321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.636670] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e2c28 00:29:34.459 [2024-07-25 12:16:11.638108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.638139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.651053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fa7d8 00:29:34.459 [2024-07-25 12:16:11.652987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.665470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ed920 00:29:34.459 [2024-07-25 12:16:11.667201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.667231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.678350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190eb760 00:29:34.459 [2024-07-25 12:16:11.680148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.680178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:29:34.459 [2024-07-25 12:16:11.691367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.459 [2024-07-25 12:16:11.692404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.692432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.705446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.459 [2024-07-25 12:16:11.706481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.706510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.719278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.459 [2024-07-25 12:16:11.720353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.459 [2024-07-25 12:16:11.720382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.459 [2024-07-25 12:16:11.733154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.460 [2024-07-25 12:16:11.734201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.460 [2024-07-25 12:16:11.734233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.460 [2024-07-25 12:16:11.746962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.460 [2024-07-25 12:16:11.747934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.460 [2024-07-25 12:16:11.747965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.760845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.718 [2024-07-25 12:16:11.761871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.761900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.774716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.718 [2024-07-25 12:16:11.775707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.775736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.790283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee5c8 00:29:34.718 [2024-07-25 12:16:11.791975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.792005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.803609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ddc00 00:29:34.718 [2024-07-25 12:16:11.804735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.804766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.817337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ddc00 00:29:34.718 [2024-07-25 12:16:11.818621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.818651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.831413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ddc00 00:29:34.718 [2024-07-25 12:16:11.832614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.832645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.845859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f4b08 00:29:34.718 [2024-07-25 12:16:11.847554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.847590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.860250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10 00:29:34.718 [2024-07-25 12:16:11.861652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.861682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.875879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fac10 00:29:34.718 [2024-07-25 12:16:11.877927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.877957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.889294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fda78 00:29:34.718 [2024-07-25 12:16:11.890765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.718 [2024-07-25 12:16:11.890795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:34.718 [2024-07-25 12:16:11.904766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e38d0 00:29:34.719 [2024-07-25 12:16:11.907071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:34.719 [2024-07-25 12:16:11.907101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:11.915112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ee190 00:29:34.719 [2024-07-25 12:16:11.916124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:11.916153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:11.929440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190de038 00:29:34.719 [2024-07-25 12:16:11.930590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:11.930627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:11.945351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190f2510 00:29:34.719 [2024-07-25 12:16:11.947010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:11.947040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:11.956913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190de8a8 00:29:34.719 [2024-07-25 12:16:11.957860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:3768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:11.957890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:11.972926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e0a68 00:29:34.719 [2024-07-25 12:16:11.974003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:11.974033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:11.988899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190fda78 00:29:34.719 [2024-07-25 12:16:11.990764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:11.990794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:12.000487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190ddc00 00:29:34.719 [2024-07-25 12:16:12.001620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:12.001650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:34.719 [2024-07-25 12:16:12.016441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e4140 00:29:34.719 [2024-07-25 12:16:12.017811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.719 [2024-07-25 12:16:12.017841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.977 [2024-07-25 12:16:12.030276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e4140 00:29:34.977 [2024-07-25 12:16:12.031607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.978 [2024-07-25 12:16:12.031637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.978 [2024-07-25 12:16:12.044134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed5c0) with pdu=0x2000190e4140 00:29:34.978 [2024-07-25 12:16:12.045453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.978 [2024-07-25 12:16:12.045482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.978 00:29:34.978 Latency(us) 00:29:34.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.978 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.978 nvme0n1 : 2.01 18179.13 71.01 0.00 0.00 7027.07 3559.80 17992.61 00:29:34.978 =================================================================================================================== 00:29:34.978 Total : 18179.13 71.01 0.00 0.00 7027.07 3559.80 17992.61 00:29:34.978 0 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:34.978 12:16:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:34.978 | .driver_specific 00:29:34.978 | .nvme_error 00:29:34.978 | .status_code 00:29:34.978 | .command_transient_transport_error' 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 107238 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 107238 ']' 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 107238 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.978 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107238 00:29:35.236 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:35.236 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:35.236 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107238' 00:29:35.236 killing process with pid 107238 00:29:35.236 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 107238 00:29:35.236 Received shutdown signal, test time was about 2.000000 seconds 00:29:35.236 00:29:35.236 Latency(us) 00:29:35.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.236 =================================================================================================================== 00:29:35.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 107238 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=108120 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 108120 /var/tmp/bperf.sock 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 108120 ']' 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.237 12:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.495 [2024-07-25 12:16:12.602903] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:35.495 [2024-07-25 12:16:12.603016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108120 ] 00:29:35.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:35.495 Zero copy mechanism will not be used. 
00:29:35.495 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.495 [2024-07-25 12:16:12.719272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.754 [2024-07-25 12:16:12.822821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.690 12:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.690 12:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:36.690 12:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:36.690 12:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:37.258 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:37.258 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.258 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:37.258 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.258 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.258 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.516 nvme0n1 00:29:37.516 12:16:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:37.516 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.516 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:37.516 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.516 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:37.516 12:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:37.516 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:37.516 Zero copy mechanism will not be used. 00:29:37.516 Running I/O for 2 seconds... 
00:29:37.516 [2024-07-25 12:16:14.798100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.516 [2024-07-25 12:16:14.798656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.516 [2024-07-25 12:16:14.798698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.516 [2024-07-25 12:16:14.808426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.516 [2024-07-25 12:16:14.808964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.516 [2024-07-25 12:16:14.809000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.817222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.817325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.817357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.825551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.826083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.826117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.832894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.833419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.833452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.839911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.840435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.840469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.847370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.847930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.847962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.855877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.856415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.856448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.864943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.865495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.865526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.874622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.875144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.875175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.884004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.884522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.884553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.892720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.893240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.775 [2024-07-25 12:16:14.893277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.901821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.902357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.902388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.911588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.912136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.775 [2024-07-25 12:16:14.912167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.775 [2024-07-25 12:16:14.920208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.775 [2024-07-25 12:16:14.920524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.920554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.928471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.928973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.929005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.937070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.937567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.937598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.945526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.946022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.946053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.954114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.954731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.954764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.963713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.964216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.964247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.971927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.972424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.972456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.979357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.979852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.979883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.987702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:14.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.988344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:14.996067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:37.776 [2024-07-25 12:16:14.996566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:14.996597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.004116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.004612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.004643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.013027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.013524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.013555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.020937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.021426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.021458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.028899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.029376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.029407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.037428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.037909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.037946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.046528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.047081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.047112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.055304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.055812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.055844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 
12:16:15.063833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.064337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.064367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.776 [2024-07-25 12:16:15.072367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:37.776 [2024-07-25 12:16:15.072876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.776 [2024-07-25 12:16:15.072906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.080138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.080611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.080643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.087143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.087614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.087658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.094006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.094468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.094498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.100696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.101155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.101185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.107435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.107897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.107928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.114082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.114545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.114576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.121468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.121943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.121975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.128733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.129196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.129228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.135943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.136394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.136425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.036 [2024-07-25 12:16:15.143344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.036 [2024-07-25 12:16:15.143800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.036 [2024-07-25 12:16:15.143831] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.151572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.152062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.152093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.159542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.160011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.160043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.167014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.167483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.167514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.173942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.174397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.174427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.180346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.180808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.180839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.186613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.187070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.187101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.193684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.194142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.194172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.202106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.202554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.202585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.209332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.209813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.209844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.216005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.216471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.216502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.222519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.222988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.223019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.228962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.229409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.229445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.234793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.235234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.235265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.240768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.241208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.241239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.246495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.246939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.246969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.252130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:38.037 [2024-07-25 12:16:15.252530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.252560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.259124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.259649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.259680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.268128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.268551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.268582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.273856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.274223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.274255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.278978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.279338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.279368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.284026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.284431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.284461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.289709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.290086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.290117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.294741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.295112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.295142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 
12:16:15.299925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.300289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.300320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.305370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.305776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.305806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.311085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.311428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.037 [2024-07-25 12:16:15.311460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.037 [2024-07-25 12:16:15.316116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.037 [2024-07-25 12:16:15.316472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.038 [2024-07-25 12:16:15.316503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.038 [2024-07-25 12:16:15.321123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.038 [2024-07-25 12:16:15.321476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.038 [2024-07-25 12:16:15.321507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.038 [2024-07-25 12:16:15.326152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.038 [2024-07-25 12:16:15.326496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.038 [2024-07-25 12:16:15.326527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.038 [2024-07-25 12:16:15.331351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.038 [2024-07-25 12:16:15.331717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.038 [2024-07-25 12:16:15.331749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.297 [2024-07-25 12:16:15.336920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.297 [2024-07-25 12:16:15.337274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.297 [2024-07-25 12:16:15.337304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.297 [2024-07-25 12:16:15.342979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.297 [2024-07-25 12:16:15.343425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.297 [2024-07-25 12:16:15.343456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.297 [2024-07-25 12:16:15.350639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.297 [2024-07-25 12:16:15.351123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.297 [2024-07-25 12:16:15.351154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.297 [2024-07-25 12:16:15.359433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.297 [2024-07-25 12:16:15.359795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.297 [2024-07-25 12:16:15.359826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.297 [2024-07-25 12:16:15.366325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.297 [2024-07-25 12:16:15.366677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.297 [2024-07-25 12:16:15.366708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.373385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.373725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.373755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.380533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.380860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.380891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.386458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.386802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.386838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.391842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.392286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.392317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.397037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.397385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.397415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.402272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.402634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.402664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.407690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.408033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.408063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.412927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.413301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.413331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.418996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.419335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.419365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.424457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.424863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.424895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.429951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.430329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.430360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.435173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.435541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.435572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.440459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.440828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.440859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.445568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.445939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.445970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.451522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.451886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.451916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.458028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:38.298 [2024-07-25 12:16:15.458367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.458397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.464906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.465245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.465276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.473289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.473749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.473779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.482760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.483151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.483181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.490233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.490734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.490764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.497798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.498153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.498183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.504615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.505000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.505032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.511264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.511638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.511668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 
12:16:15.518561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.518891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.518922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.525696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.526048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.526078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.532188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.298 [2024-07-25 12:16:15.532555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.298 [2024-07-25 12:16:15.539960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.298 [2024-07-25 12:16:15.540294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.540324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.546996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.547339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.547371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.553169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.553587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.553630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.559159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.559542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.559572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.565937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.566294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.566324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.573756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.574210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.574240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.582194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.582543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.582573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.589021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.589377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.589407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.299 [2024-07-25 12:16:15.595589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.299 [2024-07-25 12:16:15.595963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.299 [2024-07-25 12:16:15.595994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.601757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.602111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.602142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.608679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.609143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.609174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.616191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.616528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.623672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.624101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.624131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.630192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.630542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.630573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.635675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.636023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.636053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.642393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.642748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.642778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.648620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.649020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.649050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.654125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.654474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.654505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.659289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.659637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.659670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.664994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.665372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.665409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.671292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.671655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.671686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.676742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.677099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.677129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.681937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.682289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.682320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.687115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.687464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.687494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.692304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:38.559 [2024-07-25 12:16:15.692660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.559 [2024-07-25 12:16:15.692690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.559 [2024-07-25 12:16:15.697524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.559 [2024-07-25 12:16:15.697866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.697896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.703044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.703398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.703428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.708612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.708969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.708998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.714830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.715178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.715208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.720110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.720462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.720493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.727924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.728584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.728622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.736779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.737149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.737179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 
12:16:15.742321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.742685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.742716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.747921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.748255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.748287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.753801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.754213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.754243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.760424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.760772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.760803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.765829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.766196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.766226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.771065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.771411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.776305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.776662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.776693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.783076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.783452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.783483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.789159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.789502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.789534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.794520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.794857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.794887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.799800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.800153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.800182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.805079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.805434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.805465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.810438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.810807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.810837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.815889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.816223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.816259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.821256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.821619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.821649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.827472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.828039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.828069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.836364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.836756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.844243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.560 [2024-07-25 12:16:15.844576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.560 [2024-07-25 12:16:15.844613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.560 [2024-07-25 12:16:15.850971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.561 [2024-07-25 12:16:15.851359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.561 [2024-07-25 12:16:15.851390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.561 [2024-07-25 12:16:15.856997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.561 [2024-07-25 12:16:15.857351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.561 [2024-07-25 12:16:15.857382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.862455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.862879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.862909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.867815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.868163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.868194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.873219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.873563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.873593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.879387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.879740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.879771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.884750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.885116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.885146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.890113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.890457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.890487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.895419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.895755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.895785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.901719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:38.820 [2024-07-25 12:16:15.902096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.902126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.908673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.909028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.909058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.915317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.915597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.820 [2024-07-25 12:16:15.915638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.820 [2024-07-25 12:16:15.922173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.820 [2024-07-25 12:16:15.922434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.922464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.928341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.928594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.928634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.933561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.933815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.933845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.938787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.939045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.939075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.944243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.944473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 
12:16:15.949429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.949689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.949719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.954733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.954983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.955013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.960699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.960968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.960998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.966091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.966331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.966361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.971390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.971651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.971687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.976642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.976887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.976917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.981931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.982169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.982199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.987084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.987345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.987376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.992277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.992515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.992545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:15.997453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:15.997714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:15.997744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.002661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.002903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.002933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.007876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.008133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.008163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.013092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.013340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.013370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.018274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.018533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.018563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.023796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.024045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.024075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.030141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.030537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.030567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.038833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.039086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.039116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.046764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.047034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.047064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.053670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.053946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.053976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.059033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.059300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.059331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.064383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.064651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.064682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.821 [2024-07-25 12:16:16.069562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.821 [2024-07-25 12:16:16.069837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.821 [2024-07-25 12:16:16.069867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.074757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.822 [2024-07-25 12:16:16.074990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.075021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.080630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.822 [2024-07-25 12:16:16.080885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.080915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.085808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.822 [2024-07-25 12:16:16.086045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.086075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.093280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.822 [2024-07-25 12:16:16.093534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.093565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.101122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.822 [2024-07-25 12:16:16.101376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.101406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.108486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:38.822 [2024-07-25 12:16:16.108768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.108797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.822 [2024-07-25 12:16:16.115421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:38.822 [2024-07-25 12:16:16.115720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.822 [2024-07-25 12:16:16.115750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.121974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.122221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.122252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.127238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.127490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.127525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.132468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.132717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.132750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.137865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.138121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.138151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.143044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.143281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.143311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.149451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.149715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.149745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 
12:16:16.154720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.081 [2024-07-25 12:16:16.154970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.081 [2024-07-25 12:16:16.155000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.081 [2024-07-25 12:16:16.159883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.160131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.160161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.165062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.165306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.165336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.170394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.170648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.170678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.175970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.176223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.176253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.181600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.181861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.181891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.187067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.187341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.187372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.192554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.192806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.192836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.197825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.198084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.198114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.203091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.203345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.203375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.208204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.208465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.208495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.213611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.213978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.214008] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.221105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.221441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.221476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.229920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.230353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.230384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.236216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.236522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.236551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.241589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.241917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.241947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.247634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.248023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.248054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.254882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.255201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.255232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.260264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.260526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.260556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.265629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.265924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.265955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.271031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.271325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.271355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.276238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.276496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.276526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.283173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.283527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.283558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.290563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.290810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.290841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.298443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.298723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.298754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.304985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.305252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.305283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.311925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 00:29:39.082 [2024-07-25 12:16:16.312219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.082 [2024-07-25 12:16:16.312249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.082 [2024-07-25 12:16:16.319366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90 
00:29:39.082 [2024-07-25 12:16:16.319655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.082 [2024-07-25 12:16:16.319687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:39.083 [2024-07-25 12:16:16.328084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ed760) with pdu=0x2000190fef90
00:29:39.083 [2024-07-25 12:16:16.328493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.083 [2024-07-25 12:16:16.328524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:39.603
00:29:39.603 Latency(us)
00:29:39.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:39.603 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:39.603 nvme0n1 : 2.00 4412.84 551.61 0.00 0.00 3616.43 2398.02 13464.67
00:29:39.603 ===================================================================================================================
00:29:39.603 Total : 4412.84 551.61 0.00 0.00 3616.43 2398.02 13464.67
00:29:39.603 0
00:29:39.603 12:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:39.603 12:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:39.603 12:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:39.603 12:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:39.603 | .driver_specific
00:29:39.603 | .nvme_error
00:29:39.603 | .status_code
00:29:39.603 | .command_transient_transport_error'
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 285 > 0 ))
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 108120
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 108120 ']'
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 108120
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108120
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108120'
killing process with pid 108120
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 108120
Received shutdown signal, test time was about 2.000000 seconds
00:29:39.862
00:29:39.862 Latency(us)
00:29:39.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:39.862 ===================================================================================================================
00:29:39.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:39.862 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 108120
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 105207
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 105207 ']'
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 105207
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105207
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105207'
killing process with pid 105207
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 105207
00:29:40.120 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 105207
00:29:40.379
00:29:40.379 real 0m21.029s
00:29:40.379 user 0m44.042s
00:29:40.379 sys 0m4.715s
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.379 ************************************
00:29:40.379 END TEST nvmf_digest_error
00:29:40.379 ************************************
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:40.379 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:29:40.637 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:29:40.637 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 105207 ']'
00:29:40.637 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 105207
00:29:40.637 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 105207 ']'
00:29:40.637 12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 105207
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (105207) - No such process
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 105207 is not found'
Process with pid 105207 is not found
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:42.541 12:16:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:42.541
00:29:42.541 real 0m47.580s
00:29:42.541 user 1m22.390s
00:29:42.541 sys 0m13.526s
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:42.542 ************************************
00:29:42.542 END TEST nvmf_digest
00:29:42.542 ************************************
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:42.542 12:16:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:42.542 ************************************
00:29:42.542 START TEST nvmf_bdevperf
00:29:42.542 ************************************
00:29:42.542 12:16:19
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:42.800 * Looking for test storage...
00:29:42.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.800 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.801 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.801 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:42.801 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:42.801 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:42.801 12:16:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.370 12:16:25 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:49.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:49.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:49.370 Found net devices under 0000:af:00.0: cvl_0_0 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:49.370 Found net devices under 0000:af:00.1: cvl_0_1 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:49.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:29:49.370 00:29:49.370 --- 10.0.0.2 ping statistics --- 00:29:49.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.370 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:49.370 00:29:49.370 --- 10.0.0.1 ping statistics --- 00:29:49.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.370 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:49.370 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:49.371 
12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=112524 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 112524 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 112524 ']' 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:49.371 12:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.371 [2024-07-25 12:16:25.834016] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:29:49.371 [2024-07-25 12:16:25.834072] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.371 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.371 [2024-07-25 12:16:25.920000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:49.371 [2024-07-25 12:16:26.027766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.371 [2024-07-25 12:16:26.027811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.371 [2024-07-25 12:16:26.027825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.371 [2024-07-25 12:16:26.027836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.371 [2024-07-25 12:16:26.027846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
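The per-namespace TCP topology that `nvmf_tcp_init` assembled earlier in this run (`ip netns add cvl_0_0_ns_spdk`, moving `cvl_0_0` into it, the 10.0.0.1/24 and 10.0.0.2/24 addresses, and the port-4420 iptables rule) can be sketched as the script below. This is a reconstruction from the logged commands only, not the harness's actual `nvmf/common.sh`; `run()` echoes each step instead of executing it, so the sketch is safe to run without root or the `cvl_0_*` interfaces.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based loopback topology built above: one port of
# the NIC pair stays in the root namespace as the initiator, the other is
# moved into a private namespace for the target. Names and addresses match
# the log. run() echoes instead of executing -- replace it with `sudo "$@"`
# on a real test box.
run() { echo "+ $*"; }

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # becomes 10.0.0.2 inside the namespace
INITIATOR_IF=cvl_0_1   # stays in the root namespace as 10.0.0.1

run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity is checked both ways before nvmf_tgt starts:
run ping -c 1 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

With the target interface inside the namespace, `nvmf_tgt` is launched via `ip netns exec cvl_0_0_ns_spdk ...` (as seen in the `nvmfappstart` entry above), so target and initiator exercise a real TCP path over the NIC pair rather than loopback.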
00:29:49.371 [2024-07-25 12:16:26.027964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.371 [2024-07-25 12:16:26.028076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.371 [2024-07-25 12:16:26.028074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 [2024-07-25 12:16:26.832753] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 Malloc0 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 [2024-07-25 12:16:26.904741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:49.630 
12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.630 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.630 { 00:29:49.630 "params": { 00:29:49.630 "name": "Nvme$subsystem", 00:29:49.630 "trtype": "$TEST_TRANSPORT", 00:29:49.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.630 "adrfam": "ipv4", 00:29:49.630 "trsvcid": "$NVMF_PORT", 00:29:49.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.630 "hdgst": ${hdgst:-false}, 00:29:49.630 "ddgst": ${ddgst:-false} 00:29:49.630 }, 00:29:49.630 "method": "bdev_nvme_attach_controller" 00:29:49.630 } 00:29:49.630 EOF 00:29:49.631 )") 00:29:49.631 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:49.631 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:49.631 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:49.631 12:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:49.631 "params": { 00:29:49.631 "name": "Nvme1", 00:29:49.631 "trtype": "tcp", 00:29:49.631 "traddr": "10.0.0.2", 00:29:49.631 "adrfam": "ipv4", 00:29:49.631 "trsvcid": "4420", 00:29:49.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.631 "hdgst": false, 00:29:49.631 "ddgst": false 00:29:49.631 }, 00:29:49.631 "method": "bdev_nvme_attach_controller" 00:29:49.631 }' 00:29:49.889 [2024-07-25 12:16:26.960612] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:29:49.889 [2024-07-25 12:16:26.960669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112679 ] 00:29:49.889 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.889 [2024-07-25 12:16:27.042392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.889 [2024-07-25 12:16:27.128931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.150 Running I/O for 1 seconds... 00:29:51.178 00:29:51.178 Latency(us) 00:29:51.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:51.178 Verification LBA range: start 0x0 length 0x4000 00:29:51.178 Nvme1n1 : 1.01 6331.68 24.73 0.00 0.00 20116.95 2129.92 16801.05 00:29:51.178 =================================================================================================================== 00:29:51.178 Total : 6331.68 24.73 0.00 0.00 20116.95 2129.92 16801.05 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=112946 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.437 12:16:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.437 { 00:29:51.437 "params": { 00:29:51.437 "name": "Nvme$subsystem", 00:29:51.437 "trtype": "$TEST_TRANSPORT", 00:29:51.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.437 "adrfam": "ipv4", 00:29:51.437 "trsvcid": "$NVMF_PORT", 00:29:51.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.437 "hdgst": ${hdgst:-false}, 00:29:51.437 "ddgst": ${ddgst:-false} 00:29:51.437 }, 00:29:51.437 "method": "bdev_nvme_attach_controller" 00:29:51.437 } 00:29:51.437 EOF 00:29:51.437 )") 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:51.437 12:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.437 "params": { 00:29:51.437 "name": "Nvme1", 00:29:51.437 "trtype": "tcp", 00:29:51.437 "traddr": "10.0.0.2", 00:29:51.437 "adrfam": "ipv4", 00:29:51.437 "trsvcid": "4420", 00:29:51.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.437 "hdgst": false, 00:29:51.437 "ddgst": false 00:29:51.437 }, 00:29:51.437 "method": "bdev_nvme_attach_controller" 00:29:51.437 }' 00:29:51.437 [2024-07-25 12:16:28.603948] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:29:51.437 [2024-07-25 12:16:28.604011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112946 ]
00:29:51.437 EAL: No free 2048 kB hugepages reported on node 1
00:29:51.437 [2024-07-25 12:16:28.683395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:51.696 [2024-07-25 12:16:28.769658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:51.955 Running I/O for 15 seconds...
00:29:54.492 12:16:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 112524
00:29:54.492 12:16:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:54.492 [2024-07-25 12:16:31.573431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.492 [2024-07-25 12:16:31.573474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion NOTICE pair repeats for each outstanding I/O on qid 1: READs at lba 121288 through 122096 (len:8, varying cid) and two WRITEs at lba 122288 and 122296, every one completed with ABORTED - SQ DELETION (00/08) following the kill -9 ...]
00:29:54.495 [2024-07-25 12:16:31.576009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 
[2024-07-25 12:16:31.576138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.495 [2024-07-25 12:16:31.576484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27030 is same with the state(5) to be set 00:29:54.495 [2024-07-25 12:16:31.576507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:54.495 [2024-07-25 12:16:31.576515] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:54.495 [2024-07-25 12:16:31.576524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122280 len:8 PRP1 0x0 PRP2 0x0 00:29:54.495 [2024-07-25 12:16:31.576535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.495 [2024-07-25 12:16:31.576585] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa27030 was disconnected and freed. reset controller. 00:29:54.495 [2024-07-25 12:16:31.580877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.495 [2024-07-25 12:16:31.580946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.495 [2024-07-25 12:16:31.581644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.495 [2024-07-25 12:16:31.581666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.495 [2024-07-25 12:16:31.581678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.495 [2024-07-25 12:16:31.581943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.495 [2024-07-25 12:16:31.582208] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.495 [2024-07-25 12:16:31.582219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.495 [2024-07-25 12:16:31.582229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:54.495 [2024-07-25 12:16:31.586469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.495 [2024-07-25 12:16:31.595750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.495 [2024-07-25 12:16:31.596201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.495 [2024-07-25 12:16:31.596225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.495 [2024-07-25 12:16:31.596235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.495 [2024-07-25 12:16:31.596499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.495 [2024-07-25 12:16:31.596771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.495 [2024-07-25 12:16:31.596784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.495 [2024-07-25 12:16:31.596793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.495 [2024-07-25 12:16:31.601040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.495 [2024-07-25 12:16:31.610320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.495 [2024-07-25 12:16:31.610887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.610910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.610920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.611184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.611450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.611462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.611471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.615717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.624976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.625575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.625628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.625651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.625996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.626261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.626273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.626282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.630526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.639566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.640151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.640194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.640215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.640808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.641093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.641104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.641113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.645362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.654138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.654759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.654804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.654825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.655318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.655582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.655594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.655611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.659847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.668869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.669507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.669551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.669572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.670124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.670389] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.670401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.670410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.674657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.683417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.683862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.683884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.683893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.684156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.684421] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.684432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.684442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.688685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.697970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.698530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.698574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.698595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.699139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.699404] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.699415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.699429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.703683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.712710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.713305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.713347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.713369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.713962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.714406] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.714418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.714427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.496 [2024-07-25 12:16:31.720709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.496 [2024-07-25 12:16:31.727662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.496 [2024-07-25 12:16:31.728148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.496 [2024-07-25 12:16:31.728170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.496 [2024-07-25 12:16:31.728180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.496 [2024-07-25 12:16:31.728444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.496 [2024-07-25 12:16:31.728719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.496 [2024-07-25 12:16:31.728732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.496 [2024-07-25 12:16:31.728741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.497 [2024-07-25 12:16:31.732982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.497 [2024-07-25 12:16:31.742260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.497 [2024-07-25 12:16:31.742884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.497 [2024-07-25 12:16:31.742906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.497 [2024-07-25 12:16:31.742916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.497 [2024-07-25 12:16:31.743180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.497 [2024-07-25 12:16:31.743444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.497 [2024-07-25 12:16:31.743455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.497 [2024-07-25 12:16:31.743464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.497 [2024-07-25 12:16:31.747716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.497 [2024-07-25 12:16:31.756977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.497 [2024-07-25 12:16:31.757543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.497 [2024-07-25 12:16:31.757569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.497 [2024-07-25 12:16:31.757579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.497 [2024-07-25 12:16:31.757849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.497 [2024-07-25 12:16:31.758114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.497 [2024-07-25 12:16:31.758125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.497 [2024-07-25 12:16:31.758134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.497 [2024-07-25 12:16:31.762374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.497 [2024-07-25 12:16:31.771650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.497 [2024-07-25 12:16:31.772133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.497 [2024-07-25 12:16:31.772155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.497 [2024-07-25 12:16:31.772165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.497 [2024-07-25 12:16:31.772428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.497 [2024-07-25 12:16:31.772697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.497 [2024-07-25 12:16:31.772709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.497 [2024-07-25 12:16:31.772719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.497 [2024-07-25 12:16:31.776961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.497 [2024-07-25 12:16:31.786230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.497 [2024-07-25 12:16:31.786793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.497 [2024-07-25 12:16:31.786815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:54.497 [2024-07-25 12:16:31.786825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:54.497 [2024-07-25 12:16:31.787089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:54.497 [2024-07-25 12:16:31.787354] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.497 [2024-07-25 12:16:31.787365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.497 [2024-07-25 12:16:31.787374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.757 [2024-07-25 12:16:31.791628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.020 [... 28 further identical reset iterations against nqn.2016-06.io.spdk:cnode1 omitted: each repeats the same sequence (resetting controller; connect() failed, errno = 111; sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420; controller reinitialization failed; Resetting controller failed), spanning 2024-07-25 12:16:31.800907 through 12:16:32.202875 ...]
00:29:55.020 [2024-07-25 12:16:32.212145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.020 [2024-07-25 12:16:32.212720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.020 [2024-07-25 12:16:32.212743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.020 [2024-07-25 12:16:32.212753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.020 [2024-07-25 12:16:32.213017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.020 [2024-07-25 12:16:32.213281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.020 [2024-07-25 12:16:32.213292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.020 [2024-07-25 12:16:32.213301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.020 [2024-07-25 12:16:32.217549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.020 [2024-07-25 12:16:32.226813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.020 [2024-07-25 12:16:32.227302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.020 [2024-07-25 12:16:32.227322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.020 [2024-07-25 12:16:32.227332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.020 [2024-07-25 12:16:32.227594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.020 [2024-07-25 12:16:32.227864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.020 [2024-07-25 12:16:32.227875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.020 [2024-07-25 12:16:32.227884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.020 [2024-07-25 12:16:32.232149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.020 [2024-07-25 12:16:32.241422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.020 [2024-07-25 12:16:32.241990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.020 [2024-07-25 12:16:32.242012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.020 [2024-07-25 12:16:32.242022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.020 [2024-07-25 12:16:32.242285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.020 [2024-07-25 12:16:32.242549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.020 [2024-07-25 12:16:32.242561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.020 [2024-07-25 12:16:32.242570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.020 [2024-07-25 12:16:32.246817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.020 [2024-07-25 12:16:32.256082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.020 [2024-07-25 12:16:32.256662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.020 [2024-07-25 12:16:32.256707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.020 [2024-07-25 12:16:32.256728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.020 [2024-07-25 12:16:32.257306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.020 [2024-07-25 12:16:32.257608] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.020 [2024-07-25 12:16:32.257620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.020 [2024-07-25 12:16:32.257629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.020 [2024-07-25 12:16:32.261873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.020 [2024-07-25 12:16:32.270628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.020 [2024-07-25 12:16:32.271179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.020 [2024-07-25 12:16:32.271201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.020 [2024-07-25 12:16:32.271211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.020 [2024-07-25 12:16:32.271474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.020 [2024-07-25 12:16:32.271746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.020 [2024-07-25 12:16:32.271758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.020 [2024-07-25 12:16:32.271767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.020 [2024-07-25 12:16:32.276006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.020 [2024-07-25 12:16:32.285254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.021 [2024-07-25 12:16:32.285821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.021 [2024-07-25 12:16:32.285864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.021 [2024-07-25 12:16:32.285885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.021 [2024-07-25 12:16:32.286450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.021 [2024-07-25 12:16:32.286722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.021 [2024-07-25 12:16:32.286734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.021 [2024-07-25 12:16:32.286744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.021 [2024-07-25 12:16:32.290980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.021 [2024-07-25 12:16:32.299980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.021 [2024-07-25 12:16:32.300518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.021 [2024-07-25 12:16:32.300539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.021 [2024-07-25 12:16:32.300553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.021 [2024-07-25 12:16:32.300824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.021 [2024-07-25 12:16:32.301089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.021 [2024-07-25 12:16:32.301101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.021 [2024-07-25 12:16:32.301110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.021 [2024-07-25 12:16:32.305354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.021 [2024-07-25 12:16:32.314631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.021 [2024-07-25 12:16:32.315199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.021 [2024-07-25 12:16:32.315243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.021 [2024-07-25 12:16:32.315264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.021 [2024-07-25 12:16:32.315747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.021 [2024-07-25 12:16:32.316013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.021 [2024-07-25 12:16:32.316025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.021 [2024-07-25 12:16:32.316034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.320279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.329280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.329779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.329801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.329812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.330075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.330340] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.330352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.330361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.334617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.343895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.344487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.344509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.344519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.344790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.345056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.345071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.345080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.349331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.358610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.359211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.359254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.359275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.359819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.360083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.360095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.360104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.364338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.373345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.373939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.373983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.374004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.374582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.374892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.374904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.374913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.379145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.387895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.388480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.388501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.388511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.388781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.389046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.389058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.389067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.393312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.402566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.403163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.403205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.403226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.403771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.404036] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.404048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.404056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.408289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.417297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.417882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.417935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.417956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.418535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.418915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.418928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.418937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.423174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.431925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.432516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.432558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-25 12:16:32.432579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.281 [2024-07-25 12:16:32.433104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.281 [2024-07-25 12:16:32.433369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-25 12:16:32.433380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-25 12:16:32.433389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-25 12:16:32.437630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-25 12:16:32.446638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-25 12:16:32.447195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-25 12:16:32.447239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.447260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.447824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.448089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.448101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.448110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.452341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.461338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.461931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.461952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.461962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.462225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.462488] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.462500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.462509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.466750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.475997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.476588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.476643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.476664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.477183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.477448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.477459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.477468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.481716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.490723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.491295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.491316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.491326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.491589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.491859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.491872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.491885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.496123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.505383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.505955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.505998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.506019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.506598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.507178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.507189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.507199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.511445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.519959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.520550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.520593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.520629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.521166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.521430] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.521441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.521450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.525690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.534690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.535278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.535299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.535309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.535573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.535844] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.535856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.535866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.540100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.549367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.549961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.550013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.550035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.550524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.550795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.550808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.550817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.555079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.564083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.564675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.564698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.564708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.564971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.565235] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.565246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.565255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-25 12:16:32.569498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-25 12:16:32.578753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-25 12:16:32.579363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-25 12:16:32.579384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-25 12:16:32.579394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.282 [2024-07-25 12:16:32.579663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.282 [2024-07-25 12:16:32.579929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-25 12:16:32.579940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-25 12:16:32.579949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.584189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.593437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.594019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.594062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.594083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.594675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.595263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.595287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.595308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.599673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.608181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.608769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.608792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.608802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.609065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.609329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.609341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.609350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.613595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.622851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.623359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.623380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.623390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.623661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.623925] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.623936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.623945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.628179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.637434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.637952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.637994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.638015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.638565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.638836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.638849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.638858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.643101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.652103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.652609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.652631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.652641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.652904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.653168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.653180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.653189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.657428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.666679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.667271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.667313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.667333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.667798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.668063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.668074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.668084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.672316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.681320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.681767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.681788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.681798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.682061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.682326] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.682338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.682347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.686587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.695868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.696414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.696457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.696486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.696932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.697198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.697209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.543 [2024-07-25 12:16:32.697219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.543 [2024-07-25 12:16:32.701458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.543 [2024-07-25 12:16:32.710476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.543 [2024-07-25 12:16:32.710988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.543 [2024-07-25 12:16:32.711010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.543 [2024-07-25 12:16:32.711020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.543 [2024-07-25 12:16:32.711284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.543 [2024-07-25 12:16:32.711548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.543 [2024-07-25 12:16:32.711560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.711569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.715809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.725065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.725633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.725655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.725665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.725928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.726193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.726205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.726214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.730455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.739710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.740259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.740301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.740323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.740915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.741201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.741217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.741227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.745459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.754455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.755022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.755044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.755054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.755318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.755582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.755594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.755610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.759843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.769094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.769686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.769708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.769717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.769980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.770244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.770255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.770264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.774498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.783759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.784344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.784365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.784375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.784645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.784909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.784921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.784930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.789169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.798424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.799021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.799064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.799085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.799678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.800113] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.800124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.800134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.804380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.813129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.813716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.813738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.813748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.814011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.814275] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.814286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.814296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.818535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.544 [2024-07-25 12:16:32.827755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.544 [2024-07-25 12:16:32.828344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.544 [2024-07-25 12:16:32.828390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.544 [2024-07-25 12:16:32.828411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.544 [2024-07-25 12:16:32.828941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.544 [2024-07-25 12:16:32.829206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.544 [2024-07-25 12:16:32.829217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.544 [2024-07-25 12:16:32.829226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.544 [2024-07-25 12:16:32.833463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.805 [2024-07-25 12:16:32.842470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.805 [2024-07-25 12:16:32.843100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.805 [2024-07-25 12:16:32.843144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.805 [2024-07-25 12:16:32.843165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.805 [2024-07-25 12:16:32.843687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.805 [2024-07-25 12:16:32.843952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.805 [2024-07-25 12:16:32.843963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.805 [2024-07-25 12:16:32.843972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.805 [2024-07-25 12:16:32.848211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.805 [2024-07-25 12:16:32.857229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.805 [2024-07-25 12:16:32.857806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.805 [2024-07-25 12:16:32.857849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.805 [2024-07-25 12:16:32.857871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.805 [2024-07-25 12:16:32.858448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.805 [2024-07-25 12:16:32.859041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.805 [2024-07-25 12:16:32.859067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.805 [2024-07-25 12:16:32.859087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.805 [2024-07-25 12:16:32.863384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.805 [2024-07-25 12:16:32.871888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.805 [2024-07-25 12:16:32.872487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.805 [2024-07-25 12:16:32.872531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.805 [2024-07-25 12:16:32.872552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.805 [2024-07-25 12:16:32.873079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.805 [2024-07-25 12:16:32.873344] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.805 [2024-07-25 12:16:32.873356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.805 [2024-07-25 12:16:32.873365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.805 [2024-07-25 12:16:32.877593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.805 [2024-07-25 12:16:32.886590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.805 [2024-07-25 12:16:32.887151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.805 [2024-07-25 12:16:32.887172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.805 [2024-07-25 12:16:32.887182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.805 [2024-07-25 12:16:32.887445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.805 [2024-07-25 12:16:32.887716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.805 [2024-07-25 12:16:32.887729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.805 [2024-07-25 12:16:32.887746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.805 [2024-07-25 12:16:32.891987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.805 [2024-07-25 12:16:32.901242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.805 [2024-07-25 12:16:32.901826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.805 [2024-07-25 12:16:32.901849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:55.805 [2024-07-25 12:16:32.901858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:55.805 [2024-07-25 12:16:32.902121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:55.805 [2024-07-25 12:16:32.902385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.805 [2024-07-25 12:16:32.902396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.805 [2024-07-25 12:16:32.902406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.805 [2024-07-25 12:16:32.906661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.805 [2024-07-25 12:16:32.915935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.805 [2024-07-25 12:16:32.916516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.805 [2024-07-25 12:16:32.916537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.805 [2024-07-25 12:16:32.916547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.805 [2024-07-25 12:16:32.916818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.805 [2024-07-25 12:16:32.917082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.805 [2024-07-25 12:16:32.917093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.805 [2024-07-25 12:16:32.917103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.805 [2024-07-25 12:16:32.921341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.805 [2024-07-25 12:16:32.930592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.805 [2024-07-25 12:16:32.931182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.805 [2024-07-25 12:16:32.931230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.805 [2024-07-25 12:16:32.931252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.805 [2024-07-25 12:16:32.931841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.805 [2024-07-25 12:16:32.932163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.805 [2024-07-25 12:16:32.932174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.805 [2024-07-25 12:16:32.932183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.805 [2024-07-25 12:16:32.936414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.805 [2024-07-25 12:16:32.945346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.805 [2024-07-25 12:16:32.945952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.805 [2024-07-25 12:16:32.946004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:32.946026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:32.946616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:32.947210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:32.947221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:32.947231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:32.951465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:32.959962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:32.960553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:32.960596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:32.960633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:32.961136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:32.961401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:32.961413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:32.961422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:32.965665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:32.974670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:32.975257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:32.975299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:32.975320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:32.975918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:32.976435] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:32.976447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:32.976456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:32.980692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:32.989441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:32.990009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:32.990030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:32.990040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:32.990304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:32.990571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:32.990582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:32.990592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:32.994837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.004095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.004681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:33.004702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:33.004712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:33.004975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:33.005247] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:33.005259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:33.005268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:33.009508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.018768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.019362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:33.019405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:33.019426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:33.020019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:33.020345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:33.020357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:33.020368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:33.024610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.033364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.033954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:33.033977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:33.033987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:33.034251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:33.034517] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:33.034530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:33.034540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:33.038787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.048047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.048640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:33.048683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:33.048706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:33.049199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:33.049464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:33.049477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:33.049487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:33.055551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.063112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.063629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:33.063652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:33.063663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:33.063927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:33.064192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:33.064205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:33.064215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:33.068458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.077713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.078306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.806 [2024-07-25 12:16:33.078349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.806 [2024-07-25 12:16:33.078371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.806 [2024-07-25 12:16:33.078962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.806 [2024-07-25 12:16:33.079508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.806 [2024-07-25 12:16:33.079522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.806 [2024-07-25 12:16:33.079533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.806 [2024-07-25 12:16:33.085294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.806 [2024-07-25 12:16:33.093050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.806 [2024-07-25 12:16:33.093659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.807 [2024-07-25 12:16:33.093682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:55.807 [2024-07-25 12:16:33.093696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:55.807 [2024-07-25 12:16:33.093961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:55.807 [2024-07-25 12:16:33.094228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.807 [2024-07-25 12:16:33.094241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.807 [2024-07-25 12:16:33.094251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.807 [2024-07-25 12:16:33.098494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.107776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.108289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.108312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.108322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.108587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.108861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.067 [2024-07-25 12:16:33.108874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.067 [2024-07-25 12:16:33.108885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.067 [2024-07-25 12:16:33.113140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.122403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.122992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.123035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.123057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.123565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.123837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.067 [2024-07-25 12:16:33.123850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.067 [2024-07-25 12:16:33.123860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.067 [2024-07-25 12:16:33.128097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.137110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.137541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.137564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.137574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.137846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.138111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.067 [2024-07-25 12:16:33.138127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.067 [2024-07-25 12:16:33.138137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.067 [2024-07-25 12:16:33.142393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.151669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.152202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.152225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.152236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.152500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.152772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.067 [2024-07-25 12:16:33.152786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.067 [2024-07-25 12:16:33.152796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.067 [2024-07-25 12:16:33.157039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.166304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.166798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.166821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.166831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.167095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.167361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.067 [2024-07-25 12:16:33.167374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.067 [2024-07-25 12:16:33.167383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.067 [2024-07-25 12:16:33.171632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.180889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.181401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.181423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.181434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.181705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.181971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.067 [2024-07-25 12:16:33.181985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.067 [2024-07-25 12:16:33.181994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.067 [2024-07-25 12:16:33.186243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.067 [2024-07-25 12:16:33.195521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.067 [2024-07-25 12:16:33.195967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.067 [2024-07-25 12:16:33.195989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.067 [2024-07-25 12:16:33.195999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.067 [2024-07-25 12:16:33.196265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.067 [2024-07-25 12:16:33.196530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.068 [2024-07-25 12:16:33.196543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.068 [2024-07-25 12:16:33.196553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.068 [2024-07-25 12:16:33.200805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.068 [2024-07-25 12:16:33.210088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.068 [2024-07-25 12:16:33.210650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.068 [2024-07-25 12:16:33.210672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.068 [2024-07-25 12:16:33.210683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.068 [2024-07-25 12:16:33.210948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.068 [2024-07-25 12:16:33.211213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.068 [2024-07-25 12:16:33.211227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.068 [2024-07-25 12:16:33.211237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.068 [2024-07-25 12:16:33.215469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.068 [2024-07-25 12:16:33.224741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.068 [2024-07-25 12:16:33.225254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.068 [2024-07-25 12:16:33.225277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.068 [2024-07-25 12:16:33.225287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.068 [2024-07-25 12:16:33.225551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.068 [2024-07-25 12:16:33.225824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.068 [2024-07-25 12:16:33.225838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.068 [2024-07-25 12:16:33.225847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.068 [2024-07-25 12:16:33.230092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.068 [2024-07-25 12:16:33.239362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.068 [2024-07-25 12:16:33.239878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.068 [2024-07-25 12:16:33.239900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.068 [2024-07-25 12:16:33.239911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.068 [2024-07-25 12:16:33.240180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.068 [2024-07-25 12:16:33.240445] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.068 [2024-07-25 12:16:33.240458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.068 [2024-07-25 12:16:33.240468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.068 [2024-07-25 12:16:33.244723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.068 [2024-07-25 12:16:33.253996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.068 [2024-07-25 12:16:33.254584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.068 [2024-07-25 12:16:33.254612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.068 [2024-07-25 12:16:33.254624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.068 [2024-07-25 12:16:33.254888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.068 [2024-07-25 12:16:33.255154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.068 [2024-07-25 12:16:33.255168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.068 [2024-07-25 12:16:33.255178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.068 [2024-07-25 12:16:33.259416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.068 [2024-07-25 12:16:33.268680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.068 [2024-07-25 12:16:33.269261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.068 [2024-07-25 12:16:33.269283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:56.068 [2024-07-25 12:16:33.269294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:56.068 [2024-07-25 12:16:33.269559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:56.068 [2024-07-25 12:16:33.269833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.068 [2024-07-25 12:16:33.269847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.068 [2024-07-25 12:16:33.269857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.068 [2024-07-25 12:16:33.274095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.068 [2024-07-25 12:16:33.283354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.068 [2024-07-25 12:16:33.283919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-25 12:16:33.283942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.068 [2024-07-25 12:16:33.283952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.068 [2024-07-25 12:16:33.284218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.068 [2024-07-25 12:16:33.284483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.068 [2024-07-25 12:16:33.284496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.068 [2024-07-25 12:16:33.284511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.068 [2024-07-25 12:16:33.288761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.068 [2024-07-25 12:16:33.298046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.068 [2024-07-25 12:16:33.298563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-25 12:16:33.298586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.068 [2024-07-25 12:16:33.298596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.068 [2024-07-25 12:16:33.298867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.068 [2024-07-25 12:16:33.299134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.068 [2024-07-25 12:16:33.299146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.068 [2024-07-25 12:16:33.299157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.068 [2024-07-25 12:16:33.303395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.068 [2024-07-25 12:16:33.312675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.068 [2024-07-25 12:16:33.313182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-25 12:16:33.313204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.068 [2024-07-25 12:16:33.313214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.068 [2024-07-25 12:16:33.313478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.068 [2024-07-25 12:16:33.313750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.068 [2024-07-25 12:16:33.313764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.068 [2024-07-25 12:16:33.313775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.068 [2024-07-25 12:16:33.318015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.068 [2024-07-25 12:16:33.327275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.068 [2024-07-25 12:16:33.327859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-25 12:16:33.327882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.068 [2024-07-25 12:16:33.327892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.068 [2024-07-25 12:16:33.328157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.068 [2024-07-25 12:16:33.328422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.068 [2024-07-25 12:16:33.328435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.068 [2024-07-25 12:16:33.328445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.068 [2024-07-25 12:16:33.332688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.068 [2024-07-25 12:16:33.341949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.068 [2024-07-25 12:16:33.342512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.068 [2024-07-25 12:16:33.342533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.068 [2024-07-25 12:16:33.342544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.068 [2024-07-25 12:16:33.342814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.068 [2024-07-25 12:16:33.343080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.069 [2024-07-25 12:16:33.343093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.069 [2024-07-25 12:16:33.343102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.069 [2024-07-25 12:16:33.347350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.069 [2024-07-25 12:16:33.356610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.069 [2024-07-25 12:16:33.357173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.069 [2024-07-25 12:16:33.357195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.069 [2024-07-25 12:16:33.357205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.069 [2024-07-25 12:16:33.357470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.069 [2024-07-25 12:16:33.357741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.069 [2024-07-25 12:16:33.357755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.069 [2024-07-25 12:16:33.357765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.069 [2024-07-25 12:16:33.362005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.371274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.371922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.371946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.371956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.372221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.372487] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.372500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.372510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.376759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.386007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.386592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.386621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.386632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.386896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.387166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.387179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.387188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.391430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.400700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.401313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.401335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.401345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.401617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.401883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.401895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.401905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.406143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.415423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.416007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.416030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.416041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.416305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.416569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.416582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.416593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.420837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.430095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.430679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.430702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.430712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.430977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.431243] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.431256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.431265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.435506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.444767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.445269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.445291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.445302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.445567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.445839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.445853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.445863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.450099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.459362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.459960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.459983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.459994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.460258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.460522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.460535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.460544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.464780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.474049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.474563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.474585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.474595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.474866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.475133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.475145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.328 [2024-07-25 12:16:33.475155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.328 [2024-07-25 12:16:33.479396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.328 [2024-07-25 12:16:33.488664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.328 [2024-07-25 12:16:33.489254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.328 [2024-07-25 12:16:33.489276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.328 [2024-07-25 12:16:33.489290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.328 [2024-07-25 12:16:33.489553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.328 [2024-07-25 12:16:33.489824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.328 [2024-07-25 12:16:33.489838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.489848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.494093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.503368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.503961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.503984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.503994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.504259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.504525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.504538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.504548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.508799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.518067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.518626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.518650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.518660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.518924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.519190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.519203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.519213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.523452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.532714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.533305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.533327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.533337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.533609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.533875] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.533891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.533902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.538135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.547394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.547995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.548018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.548029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.548293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.548558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.548571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.548581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.552827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.562090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.562653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.562675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.562686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.562950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.563216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.563229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.563239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.567477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.576738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.577316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.577338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.577349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.577618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.577884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.577898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.577907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.582144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.591406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.591979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.592022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.592045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.592610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.592877] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.592890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.592900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.597134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.606141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.606732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.606775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.606797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.607376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.607794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.607807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.607817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.612058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.329 [2024-07-25 12:16:33.620929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.329 [2024-07-25 12:16:33.621445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.329 [2024-07-25 12:16:33.621467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.329 [2024-07-25 12:16:33.621477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.329 [2024-07-25 12:16:33.621748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.329 [2024-07-25 12:16:33.622015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.329 [2024-07-25 12:16:33.622029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.329 [2024-07-25 12:16:33.622038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.329 [2024-07-25 12:16:33.626277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.589 [2024-07-25 12:16:33.635541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.589 [2024-07-25 12:16:33.636104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-25 12:16:33.636127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.589 [2024-07-25 12:16:33.636137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.589 [2024-07-25 12:16:33.636409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.589 [2024-07-25 12:16:33.636684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.589 [2024-07-25 12:16:33.636698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.589 [2024-07-25 12:16:33.636708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.589 [2024-07-25 12:16:33.640939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.589 [2024-07-25 12:16:33.650195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.589 [2024-07-25 12:16:33.650708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-25 12:16:33.650732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.589 [2024-07-25 12:16:33.650743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.589 [2024-07-25 12:16:33.651008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.589 [2024-07-25 12:16:33.651273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.589 [2024-07-25 12:16:33.651285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.589 [2024-07-25 12:16:33.651294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.589 [2024-07-25 12:16:33.655535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.589 [2024-07-25 12:16:33.664798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.589 [2024-07-25 12:16:33.665386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-25 12:16:33.665428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.589 [2024-07-25 12:16:33.665450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.589 [2024-07-25 12:16:33.666010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.589 [2024-07-25 12:16:33.666276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.589 [2024-07-25 12:16:33.666288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.589 [2024-07-25 12:16:33.666298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.589 [2024-07-25 12:16:33.670530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.589 [2024-07-25 12:16:33.679527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.589 [2024-07-25 12:16:33.680123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-25 12:16:33.680167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.589 [2024-07-25 12:16:33.680188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.589 [2024-07-25 12:16:33.680763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.589 [2024-07-25 12:16:33.681030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.589 [2024-07-25 12:16:33.681043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.589 [2024-07-25 12:16:33.681057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.589 [2024-07-25 12:16:33.685293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.589 [2024-07-25 12:16:33.694299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.589 [2024-07-25 12:16:33.694837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-25 12:16:33.694860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.589 [2024-07-25 12:16:33.694870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.589 [2024-07-25 12:16:33.695135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.589 [2024-07-25 12:16:33.695401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.695413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.695423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.699666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.708954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.709533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.709577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.709599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.710152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.710418] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.710431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.710440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.714688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.723705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.724275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.724318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.724339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.724896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.725163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.725175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.725186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.729415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.738414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.739009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.739031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.739042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.739307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.739573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.739585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.739595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.743843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.753091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.753687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.753731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.753753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.754331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.754634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.754648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.754658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.758899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.767655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.768166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.768209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.768231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.768825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.769144] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.769157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.769167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.773406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.782414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.783007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.783051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.783073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.783501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.783776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.783790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.783800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.788038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.797044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.797612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.797635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.797645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.797910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.798176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.798189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.798199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.802440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.811715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.812332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.812376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.812399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.812968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.813234] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.813248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.813258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.817496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.826487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.827107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.827153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.827175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.827769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.828353] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.828385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.828398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.834245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.841614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.842199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.842223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.842234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.842499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.842774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.842790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.842800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.847039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.856291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.856894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.856940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.856961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.857482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.857755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.857768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.857779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.862011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.871021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.871601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.871630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.871641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.871906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.872171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.872184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.590 [2024-07-25 12:16:33.872194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.590 [2024-07-25 12:16:33.876428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.590 [2024-07-25 12:16:33.885686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.590 [2024-07-25 12:16:33.886301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-25 12:16:33.886344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.590 [2024-07-25 12:16:33.886373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.590 [2024-07-25 12:16:33.886966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.590 [2024-07-25 12:16:33.887549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.590 [2024-07-25 12:16:33.887563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.591 [2024-07-25 12:16:33.887572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.850 [2024-07-25 12:16:33.891822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.850 [2024-07-25 12:16:33.900340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.850 [2024-07-25 12:16:33.900933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.850 [2024-07-25 12:16:33.900955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.850 [2024-07-25 12:16:33.900965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.850 [2024-07-25 12:16:33.901228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.850 [2024-07-25 12:16:33.901495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.901508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.901518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.905765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:33.915045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:33.915636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:33.915659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:33.915670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:33.915934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:33.916200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.916213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.916223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.920471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:33.929731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:33.930288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:33.930310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:33.930321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:33.930586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:33.930858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.930875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.930885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.935122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:33.944379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:33.944968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:33.944990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:33.945001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:33.945265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:33.945531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.945544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.945553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.949800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:33.959049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:33.959854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:33.959900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:33.959923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:33.960501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:33.960957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.960976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.960990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.967219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:33.974311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:33.974868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:33.974891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:33.974902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:33.975166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:33.975431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.975444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.975453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.979696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:33.988959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:33.989518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:33.989540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:33.989551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:33.989823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:33.990089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:33.990102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:33.990112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:33.994361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.851 [2024-07-25 12:16:34.003611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.851 [2024-07-25 12:16:34.004205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.851 [2024-07-25 12:16:34.004249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.851 [2024-07-25 12:16:34.004271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.851 [2024-07-25 12:16:34.004796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.851 [2024-07-25 12:16:34.005063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.851 [2024-07-25 12:16:34.005075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.851 [2024-07-25 12:16:34.005086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.851 [2024-07-25 12:16:34.009334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.018346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.018918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.018941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.018951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.019215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.019480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.019493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.019503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.023752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.033020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.033559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.033613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.033637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.034089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.034356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.034369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.034378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.038625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.047640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.048212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.048234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.048244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.048507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.048779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.048792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.048802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.053041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.062295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.062865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.062910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.062931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.063508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.064021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.064035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.064044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.068279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.077026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.077628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.077672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.077694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.078121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.078387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.078400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.078413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.082651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.091633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.092228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.092271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.092292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.092861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.093130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.093144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.093153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.097389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.106391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.106953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-07-25 12:16:34.106975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.852 [2024-07-25 12:16:34.106986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.852 [2024-07-25 12:16:34.107250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.852 [2024-07-25 12:16:34.107515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.852 [2024-07-25 12:16:34.107528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.852 [2024-07-25 12:16:34.107537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.852 [2024-07-25 12:16:34.111800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.852 [2024-07-25 12:16:34.121074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.852 [2024-07-25 12:16:34.121651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-07-25 12:16:34.121673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.853 [2024-07-25 12:16:34.121683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.853 [2024-07-25 12:16:34.121947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.853 [2024-07-25 12:16:34.122212] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.853 [2024-07-25 12:16:34.122225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.853 [2024-07-25 12:16:34.122234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.853 [2024-07-25 12:16:34.126478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.853 [2024-07-25 12:16:34.135730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.853 [2024-07-25 12:16:34.136294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-07-25 12:16:34.136316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:56.853 [2024-07-25 12:16:34.136326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:56.853 [2024-07-25 12:16:34.136590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:56.853 [2024-07-25 12:16:34.136865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.853 [2024-07-25 12:16:34.136879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.853 [2024-07-25 12:16:34.136888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.853 [2024-07-25 12:16:34.141124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.150394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.150996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.151020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.151031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.151297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.151563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.151576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.151586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.155833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.165091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.165685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.165728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.165750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.166073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.166338] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.166351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.166361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.170598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.179854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.180438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.180460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.180470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.180741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.181011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.181024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.181034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.185277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.194537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.195127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.195149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.195159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.195423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.195697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.195711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.195721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.199952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.209201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.209763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.209805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.209825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.210411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.210694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.210708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.210718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.214954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.223949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.224557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.224599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.224636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.225116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.225382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.225395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.225405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.229646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.238642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.239243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.239287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.239309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.239901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.240440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.240453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.240464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.244711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.253206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.253713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.253736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.253747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.254012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.254277] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.254290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.254300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.258542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.267790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.268307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.268329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.113 [2024-07-25 12:16:34.268339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.113 [2024-07-25 12:16:34.268612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.113 [2024-07-25 12:16:34.268880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.113 [2024-07-25 12:16:34.268893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.113 [2024-07-25 12:16:34.268902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.113 [2024-07-25 12:16:34.273145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.113 [2024-07-25 12:16:34.282398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.113 [2024-07-25 12:16:34.283006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.113 [2024-07-25 12:16:34.283048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.283077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.283553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.283827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.283841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.283852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.288084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.297083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.297678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.297700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.297710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.297974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.298237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.298250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.298260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.302498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.311786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.312329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.312351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.312362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.312633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.312901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.312914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.312924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.317173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.326434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.327020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.327064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.327086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.327617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.327891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.327908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.327918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.332152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.341151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.341766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.341809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.341831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.342411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.342729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.342743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.342754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.346991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.355745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.356260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.356282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.356292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.356556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.356827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.356841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.356851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.361091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.370363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.370960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.371004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.371026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.371538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.371811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.371824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.371834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.376084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.385109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.385598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.385629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.385640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.385904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.386170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.386183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.386193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.390433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.114 [2024-07-25 12:16:34.399707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.114 [2024-07-25 12:16:34.400299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.114 [2024-07-25 12:16:34.400321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.114 [2024-07-25 12:16:34.400331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.114 [2024-07-25 12:16:34.400594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.114 [2024-07-25 12:16:34.400866] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.114 [2024-07-25 12:16:34.400880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.114 [2024-07-25 12:16:34.400889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.114 [2024-07-25 12:16:34.405141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.373 [2024-07-25 12:16:34.414419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.373 [2024-07-25 12:16:34.415008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.373 [2024-07-25 12:16:34.415031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.373 [2024-07-25 12:16:34.415041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.373 [2024-07-25 12:16:34.415307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.373 [2024-07-25 12:16:34.415572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.373 [2024-07-25 12:16:34.415585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.373 [2024-07-25 12:16:34.415595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.419842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.429101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.429687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.429709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.429719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.429986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.430251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.430264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.430273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.434516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.443790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.444392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.444435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.444456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.444942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.445208] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.445221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.445231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.449474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.458463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.459060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.459083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.459093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.459358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.459631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.459644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.459655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.463890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.473144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.473735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.473779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.473800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.474377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.474978] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.474992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.475005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.479243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.487751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.488338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.488360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.488370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.488643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.488909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.488922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.488931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.493165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.502426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.502956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.503000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.503022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.503600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.504145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.504159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.504168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.508404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.517168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.517761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.517804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.517825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.518404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.519001] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.519026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.519047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.523324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.531821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.532385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.532406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.532417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.532690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.532956] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.532968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.532979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.537224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.546473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.547087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.547130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.547151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.547743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.548279] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.548292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.548302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 [2024-07-25 12:16:34.552534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.374 [2024-07-25 12:16:34.561037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.374 [2024-07-25 12:16:34.561634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.374 [2024-07-25 12:16:34.561678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.374 [2024-07-25 12:16:34.561699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.374 [2024-07-25 12:16:34.562207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.374 [2024-07-25 12:16:34.562473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.374 [2024-07-25 12:16:34.562486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.374 [2024-07-25 12:16:34.562496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 112524 Killed "${NVMF_APP[@]}" "$@" 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.375 [2024-07-25 12:16:34.566738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=113996 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 113996 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 113996 ']' 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.375 12:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.375 [2024-07-25 12:16:34.575740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.576326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.576351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.576361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.576632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.576896] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.576908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.576917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 [2024-07-25 12:16:34.581167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.375 [2024-07-25 12:16:34.590439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.591000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.591022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.591031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.591294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.591559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.591572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.591582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 [2024-07-25 12:16:34.595847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.375 [2024-07-25 12:16:34.605137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.605623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.605646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.605657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.605926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.606192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.606205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.606215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 [2024-07-25 12:16:34.610464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.375 [2024-07-25 12:16:34.619764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.620359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.620383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.620393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.620665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.620931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.620945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.620955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 [2024-07-25 12:16:34.625202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.375 [2024-07-25 12:16:34.626287] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:29:57.375 [2024-07-25 12:16:34.626340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.375 [2024-07-25 12:16:34.634467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.634986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.635009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.635021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.635284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.635548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.635560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.635570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 [2024-07-25 12:16:34.639829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.375 [2024-07-25 12:16:34.649203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.649725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.649748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.649758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.650022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.650291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.650303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.650313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 [2024-07-25 12:16:34.654578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.375 [2024-07-25 12:16:34.663881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.375 [2024-07-25 12:16:34.664363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.375 [2024-07-25 12:16:34.664384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.375 [2024-07-25 12:16:34.664395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.375 [2024-07-25 12:16:34.664668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.375 [2024-07-25 12:16:34.664934] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.375 [2024-07-25 12:16:34.664946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.375 [2024-07-25 12:16:34.664955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.375 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.375 [2024-07-25 12:16:34.669212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.678491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.679068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.679090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.679100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.679365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.679636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.679648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.679658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.683900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.693168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.693682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.693703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.693714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.693981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.694247] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.694259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.694269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.698516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.707784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.708290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.708311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.708322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.708584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.708857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.708870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.708880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.713142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.716421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:57.636 [2024-07-25 12:16:34.722416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.722906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.722930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.722940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.723203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.723468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.723479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.723489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.727739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.737007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.737455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.737477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.737488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.737757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.738022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.738033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.738043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.742285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.751560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.752080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.752107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.752118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.752381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.752654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.752667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.752676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.756922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.766180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.766794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.766816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.766826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.767091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.767356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.767368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.767377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.771628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.636 [2024-07-25 12:16:34.780897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.636 [2024-07-25 12:16:34.781481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.636 [2024-07-25 12:16:34.781504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.636 [2024-07-25 12:16:34.781515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.636 [2024-07-25 12:16:34.781787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.636 [2024-07-25 12:16:34.782051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.636 [2024-07-25 12:16:34.782063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.636 [2024-07-25 12:16:34.782072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.636 [2024-07-25 12:16:34.786323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.795595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.796145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.796166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.796176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.796439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.796717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.796730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.796740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.800981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.810239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.810784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.810806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.810816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.811079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.811343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.811355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.811364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.815638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.637 [2024-07-25 12:16:34.819517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.637 [2024-07-25 12:16:34.819557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.637 [2024-07-25 12:16:34.819570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.637 [2024-07-25 12:16:34.819581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:57.637 [2024-07-25 12:16:34.819590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.637 [2024-07-25 12:16:34.819894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.637 [2024-07-25 12:16:34.819935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.637 [2024-07-25 12:16:34.819937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.637 [2024-07-25 12:16:34.825171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.825782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.825807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.825818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.826082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.826347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.826359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.826369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.830617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.839888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.840432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.840462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.840472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.840745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.841011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.841022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.841032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.845282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.854563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.855070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.855095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.855105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.855369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.855641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.855654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.855664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.859942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.869203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.869748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.869772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.869783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.870046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.870311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.870322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.870332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.874574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.883853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.884295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.884319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.884329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.884593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.884871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.884884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.884894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.889135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.898407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.898934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.898956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.898966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.899230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.899494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.899506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.899515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.637 [2024-07-25 12:16:34.903764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.637 [2024-07-25 12:16:34.913053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.637 [2024-07-25 12:16:34.913646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.637 [2024-07-25 12:16:34.913668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.637 [2024-07-25 12:16:34.913679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.637 [2024-07-25 12:16:34.913942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.637 [2024-07-25 12:16:34.914206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.637 [2024-07-25 12:16:34.914218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.637 [2024-07-25 12:16:34.914227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.638 [2024-07-25 12:16:34.918467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.638 [2024-07-25 12:16:34.927774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.638 [2024-07-25 12:16:34.928266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.638 [2024-07-25 12:16:34.928288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.638 [2024-07-25 12:16:34.928298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.638 [2024-07-25 12:16:34.928561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.638 [2024-07-25 12:16:34.928834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.638 [2024-07-25 12:16:34.928847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.638 [2024-07-25 12:16:34.928857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.638 [2024-07-25 12:16:34.933104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:34.942375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.898 [2024-07-25 12:16:34.942832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-25 12:16:34.942853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.898 [2024-07-25 12:16:34.942863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.898 [2024-07-25 12:16:34.943125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.898 [2024-07-25 12:16:34.943389] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.898 [2024-07-25 12:16:34.943400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.898 [2024-07-25 12:16:34.943409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.898 [2024-07-25 12:16:34.947664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:34.956974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.898 [2024-07-25 12:16:34.957568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-25 12:16:34.957590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.898 [2024-07-25 12:16:34.957600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.898 [2024-07-25 12:16:34.957871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.898 [2024-07-25 12:16:34.958136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.898 [2024-07-25 12:16:34.958147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.898 [2024-07-25 12:16:34.958157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.898 [2024-07-25 12:16:34.962397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:34.971668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.898 [2024-07-25 12:16:34.972206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-25 12:16:34.972228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.898 [2024-07-25 12:16:34.972238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.898 [2024-07-25 12:16:34.972501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.898 [2024-07-25 12:16:34.972772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.898 [2024-07-25 12:16:34.972784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.898 [2024-07-25 12:16:34.972793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.898 [2024-07-25 12:16:34.977037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:34.986310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.898 [2024-07-25 12:16:34.986925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-25 12:16:34.986947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.898 [2024-07-25 12:16:34.986961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.898 [2024-07-25 12:16:34.987224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.898 [2024-07-25 12:16:34.987489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.898 [2024-07-25 12:16:34.987500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.898 [2024-07-25 12:16:34.987509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.898 [2024-07-25 12:16:34.991748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:35.001023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.898 [2024-07-25 12:16:35.001646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-25 12:16:35.001669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.898 [2024-07-25 12:16:35.001679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.898 [2024-07-25 12:16:35.001944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.898 [2024-07-25 12:16:35.002207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.898 [2024-07-25 12:16:35.002219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.898 [2024-07-25 12:16:35.002228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.898 [2024-07-25 12:16:35.006471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:35.015754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.898 [2024-07-25 12:16:35.016294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-25 12:16:35.016315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.898 [2024-07-25 12:16:35.016325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.898 [2024-07-25 12:16:35.016588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.898 [2024-07-25 12:16:35.016857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.898 [2024-07-25 12:16:35.016869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.898 [2024-07-25 12:16:35.016878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.898 [2024-07-25 12:16:35.021111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.898 [2024-07-25 12:16:35.030379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.899 [2024-07-25 12:16:35.030898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-25 12:16:35.030920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:57.899 [2024-07-25 12:16:35.030930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:57.899 [2024-07-25 12:16:35.031192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:57.899 [2024-07-25 12:16:35.031455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.899 [2024-07-25 12:16:35.031471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.899 [2024-07-25 12:16:35.031480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.899 [2024-07-25 12:16:35.035724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.899 [2024-07-25 12:16:35.045001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.899 [2024-07-25 12:16:35.045588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.899 [2024-07-25 12:16:35.045616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.899 [2024-07-25 12:16:35.045627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.899 [2024-07-25 12:16:35.045889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.899 [2024-07-25 12:16:35.046153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.899 [2024-07-25 12:16:35.046165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.899 [2024-07-25 12:16:35.046174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.899 [2024-07-25 12:16:35.050415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.899 [2024-07-25 12:16:35.059683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.899 [2024-07-25 12:16:35.060191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.899 [2024-07-25 12:16:35.060212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.899 [2024-07-25 12:16:35.060222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.899 [2024-07-25 12:16:35.060485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.899 [2024-07-25 12:16:35.060756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.899 [2024-07-25 12:16:35.060768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.899 [2024-07-25 12:16:35.060778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.899 [2024-07-25 12:16:35.065019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.899 [2024-07-25 12:16:35.074280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.899 [2024-07-25 12:16:35.074814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.899 [2024-07-25 12:16:35.074837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.899 [2024-07-25 12:16:35.074847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.899 [2024-07-25 12:16:35.075111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.899 [2024-07-25 12:16:35.075375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.899 [2024-07-25 12:16:35.075386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.899 [2024-07-25 12:16:35.075396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.899 [2024-07-25 12:16:35.079648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.899 [2024-07-25 12:16:35.088924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.899 [2024-07-25 12:16:35.089495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.899 [2024-07-25 12:16:35.089517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.899 [2024-07-25 12:16:35.089527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.899 [2024-07-25 12:16:35.089797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.899 [2024-07-25 12:16:35.090061] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.899 [2024-07-25 12:16:35.090073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.899 [2024-07-25 12:16:35.090082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.899 [2024-07-25 12:16:35.094324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.899 [2024-07-25 12:16:35.103592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.899 [2024-07-25 12:16:35.104036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.899 [2024-07-25 12:16:35.104057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.899 [2024-07-25 12:16:35.104067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.899 [2024-07-25 12:16:35.104330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.899 [2024-07-25 12:16:35.104594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.899 [2024-07-25 12:16:35.104612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.899 [2024-07-25 12:16:35.104622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.899 [2024-07-25 12:16:35.108862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.899 [2024-07-25 12:16:35.118126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.899 [2024-07-25 12:16:35.118716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.899 [2024-07-25 12:16:35.118737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.899 [2024-07-25 12:16:35.118747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.899 [2024-07-25 12:16:35.119010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.899 [2024-07-25 12:16:35.119274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.899 [2024-07-25 12:16:35.119286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.899 [2024-07-25 12:16:35.119295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.899 [2024-07-25 12:16:35.123532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.899 [2024-07-25 12:16:35.132796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.900 [2024-07-25 12:16:35.133381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.900 [2024-07-25 12:16:35.133402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.900 [2024-07-25 12:16:35.133412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.900 [2024-07-25 12:16:35.133689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.900 [2024-07-25 12:16:35.133955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.900 [2024-07-25 12:16:35.133966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.900 [2024-07-25 12:16:35.133975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.900 [2024-07-25 12:16:35.138217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.900 [2024-07-25 12:16:35.147466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.900 [2024-07-25 12:16:35.148061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.900 [2024-07-25 12:16:35.148083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.900 [2024-07-25 12:16:35.148092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.900 [2024-07-25 12:16:35.148355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.900 [2024-07-25 12:16:35.148625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.900 [2024-07-25 12:16:35.148637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.900 [2024-07-25 12:16:35.148646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.900 [2024-07-25 12:16:35.152881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.900 [2024-07-25 12:16:35.162141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.900 [2024-07-25 12:16:35.162698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.900 [2024-07-25 12:16:35.162720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.900 [2024-07-25 12:16:35.162731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.900 [2024-07-25 12:16:35.162993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.900 [2024-07-25 12:16:35.163257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.900 [2024-07-25 12:16:35.163268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.900 [2024-07-25 12:16:35.163277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.900 [2024-07-25 12:16:35.167520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.900 [2024-07-25 12:16:35.176762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.900 [2024-07-25 12:16:35.177346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.900 [2024-07-25 12:16:35.177367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.900 [2024-07-25 12:16:35.177377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.900 [2024-07-25 12:16:35.177646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.900 [2024-07-25 12:16:35.177909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.900 [2024-07-25 12:16:35.177921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.900 [2024-07-25 12:16:35.177934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.900 [2024-07-25 12:16:35.182174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.900 [2024-07-25 12:16:35.191443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.900 [2024-07-25 12:16:35.191936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.900 [2024-07-25 12:16:35.191957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:57.900 [2024-07-25 12:16:35.191967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:57.900 [2024-07-25 12:16:35.192229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:57.900 [2024-07-25 12:16:35.192494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.900 [2024-07-25 12:16:35.192506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.900 [2024-07-25 12:16:35.192515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.900 [2024-07-25 12:16:35.196759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.206024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.206505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.206526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.206536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.206807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.207072] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.207084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.207093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.211341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.220619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.221200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.221222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.221232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.221496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.221766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.221778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.221787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.226023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.235272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.235853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.235879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.235889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.236151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.236415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.236427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.236436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.240687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.249955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.250546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.250567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.250577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.250845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.251110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.251121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.251131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.255373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.264631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.265097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.265119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.265129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.265392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.265664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.265676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.265685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.269924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.279176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.279739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.279761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.279771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.280033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.280302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.280313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.280323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.284568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.293825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.294410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.294432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.294442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.294712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.294982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.294995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.295004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.299242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.308499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.309040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.309062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.309072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.309336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.309601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.309620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.309630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.313887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.323152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.323655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.323677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.323688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.323952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.324218] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.324231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.324241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.328494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.337761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.338332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.338354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.338364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.338633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.160 [2024-07-25 12:16:35.338900] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.160 [2024-07-25 12:16:35.338912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.160 [2024-07-25 12:16:35.338922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.160 [2024-07-25 12:16:35.343166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.160 [2024-07-25 12:16:35.352425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.160 [2024-07-25 12:16:35.353024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.160 [2024-07-25 12:16:35.353046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.160 [2024-07-25 12:16:35.353057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.160 [2024-07-25 12:16:35.353320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.353586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.353599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.353615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.357853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.367104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.367693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.367727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.367737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.368002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.368268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.368281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.368291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.372533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.381798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.382389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.382412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.382427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.382697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.382964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.382977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.382988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.387232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.396494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.396999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.397021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.397031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.397296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.397562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.397575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.397585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.401839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.411107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.411666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.411689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.411700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.411964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.412230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.412243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.412254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.416520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.425765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.426356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.426379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.426390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.426662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.426929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.426946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.426956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.431194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.440445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.440885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.440907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.440918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.441184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.441450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.441462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.441472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.161 [2024-07-25 12:16:35.445737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.161 [2024-07-25 12:16:35.455006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.161 [2024-07-25 12:16:35.455594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.161 [2024-07-25 12:16:35.455623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420
00:29:58.161 [2024-07-25 12:16:35.455633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set
00:29:58.161 [2024-07-25 12:16:35.455899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor
00:29:58.161 [2024-07-25 12:16:35.456166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.161 [2024-07-25 12:16:35.456179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.161 [2024-07-25 12:16:35.456189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.420 [2024-07-25 12:16:35.460433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.420 [2024-07-25 12:16:35.469690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.420 [2024-07-25 12:16:35.470177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-25 12:16:35.470200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.420 [2024-07-25 12:16:35.470210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.420 [2024-07-25 12:16:35.470475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.420 [2024-07-25 12:16:35.470746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.420 [2024-07-25 12:16:35.470760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.420 [2024-07-25 12:16:35.470771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.420 [2024-07-25 12:16:35.475010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.420 [2024-07-25 12:16:35.484263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.420 [2024-07-25 12:16:35.484877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-25 12:16:35.484900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.420 [2024-07-25 12:16:35.484911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.420 [2024-07-25 12:16:35.485175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.420 [2024-07-25 12:16:35.485440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.420 [2024-07-25 12:16:35.485453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.420 [2024-07-25 12:16:35.485462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.420 [2024-07-25 12:16:35.489706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.420 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:58.420 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:58.420 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.420 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:58.420 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.420 [2024-07-25 12:16:35.498969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.420 [2024-07-25 12:16:35.499516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.420 [2024-07-25 12:16:35.499542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.420 [2024-07-25 12:16:35.499553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.499824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.500090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.500102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.500111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 [2024-07-25 12:16:35.504350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.421 [2024-07-25 12:16:35.513629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.514064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.514085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.514096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.514360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.514637] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.514651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.514661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 [2024-07-25 12:16:35.518903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.421 [2024-07-25 12:16:35.528176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.528741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.528764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.528774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.421 [2024-07-25 12:16:35.529037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.529304] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.529316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.529325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.421 [2024-07-25 12:16:35.533570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.421 [2024-07-25 12:16:35.533637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.421 [2024-07-25 12:16:35.542840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.543325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.543347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.543357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.543627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.543892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.543903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.543913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 [2024-07-25 12:16:35.548152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.421 [2024-07-25 12:16:35.557404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.557970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.557992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.558003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.558266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.558529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.558545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.558554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 [2024-07-25 12:16:35.562793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.421 [2024-07-25 12:16:35.572066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.572664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.572688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.572698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.572962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.573227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.573239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.573249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 [2024-07-25 12:16:35.577496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.421 Malloc0 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.421 [2024-07-25 12:16:35.586770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.587249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.587271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.587281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.587544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.587815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.587827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.587837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.421 [2024-07-25 12:16:35.592077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.421 [2024-07-25 12:16:35.601329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.421 [2024-07-25 12:16:35.601895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.421 [2024-07-25 12:16:35.601916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e90 with addr=10.0.0.2, port=4420 00:29:58.421 [2024-07-25 12:16:35.601926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e90 is same with the state(5) to be set 00:29:58.421 [2024-07-25 12:16:35.602189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5e90 (9): Bad file descriptor 00:29:58.421 [2024-07-25 12:16:35.602320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.421 [2024-07-25 12:16:35.602454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.421 [2024-07-25 12:16:35.602466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.421 [2024-07-25 12:16:35.602475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.421 [2024-07-25 12:16:35.606716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.421 12:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 112946 [2024-07-25 12:16:35.615996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.680 [2024-07-25 12:16:35.736130] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:08.656 00:30:08.656 Latency(us) 00:30:08.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:08.656 Verification LBA range: start 0x0 length 0x4000 00:30:08.656 Nvme1n1 : 15.05 3127.47 12.22 8446.87 0.00 10998.96 949.53 49330.73 00:30:08.656 =================================================================================================================== 00:30:08.656 Total : 3127.47 12.22 8446.87 0.00 10998.96 949.53 49330.73 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.656 rmmod nvme_tcp 00:30:08.656 rmmod nvme_fabrics 00:30:08.656 rmmod nvme_keyring 00:30:08.656 12:16:44 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 113996 ']' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 113996 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 113996 ']' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 113996 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113996 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113996' 00:30:08.656 killing process with pid 113996 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 113996 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 113996 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.656 12:16:44 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.656 12:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.631 00:30:09.631 real 0m27.005s 00:30:09.631 user 1m4.388s 00:30:09.631 sys 0m6.518s 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:09.631 ************************************ 00:30:09.631 END TEST nvmf_bdevperf 00:30:09.631 ************************************ 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.631 ************************************ 00:30:09.631 START TEST nvmf_target_disconnect 00:30:09.631 ************************************ 00:30:09.631 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:09.889 * Looking for test storage... 
00:30:09.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.889 12:16:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.889 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:09.889 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:09.889 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.889 12:16:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.890 12:16:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.460 
12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.460 12:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.460 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:16.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.461 12:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:16.461 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:16.461 Found net devices under 0000:af:00.0: cvl_0_0 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:16.461 Found net devices under 0000:af:00.1: cvl_0_1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.461 12:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:30:16.461 00:30:16.461 --- 10.0.0.2 ping statistics --- 00:30:16.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.461 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:30:16.461 00:30:16.461 --- 10.0.0.1 ping statistics --- 00:30:16.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.461 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:16.461 ************************************ 00:30:16.461 START TEST nvmf_target_disconnect_tc1 00:30:16.461 ************************************ 00:30:16.461 12:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.461 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:30:16.462 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.462 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:16.462 12:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.462 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.462 [2024-07-25 12:16:53.043664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.462 [2024-07-25 12:16:53.043714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a08cf0 with addr=10.0.0.2, port=4420 00:30:16.462 [2024-07-25 12:16:53.043736] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:16.462 [2024-07-25 12:16:53.043754] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:16.462 [2024-07-25 12:16:53.043762] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:16.462 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:16.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:16.462 Initializing NVMe Controllers 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:16.462 12:16:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:16.462 00:30:16.462 real 0m0.167s 00:30:16.462 user 0m0.059s 00:30:16.462 sys 0m0.107s 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.462 ************************************ 00:30:16.462 END TEST nvmf_target_disconnect_tc1 00:30:16.462 ************************************ 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:16.462 ************************************ 00:30:16.462 START TEST nvmf_target_disconnect_tc2 00:30:16.462 ************************************ 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=119443 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 119443 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 119443 ']' 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.462 12:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.462 [2024-07-25 12:16:53.184046] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:30:16.462 [2024-07-25 12:16:53.184099] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.462 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.462 [2024-07-25 12:16:53.302778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.462 [2024-07-25 12:16:53.452482] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.462 [2024-07-25 12:16:53.452552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.462 [2024-07-25 12:16:53.452574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.462 [2024-07-25 12:16:53.452592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.462 [2024-07-25 12:16:53.452617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:16.462 [2024-07-25 12:16:53.452767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:16.462 [2024-07-25 12:16:53.452879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:16.462 [2024-07-25 12:16:53.452996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:16.462 [2024-07-25 12:16:53.453002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 Malloc0 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.030 12:16:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 [2024-07-25 12:16:54.206430] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.030 12:16:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 [2024-07-25 12:16:54.238974] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=119632 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:17.030 12:16:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:17.030 EAL: No free 2048 kB 
hugepages reported on node 1 00:30:19.591 12:16:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 119443 00:30:19.591 12:16:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 
00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Write completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 [2024-07-25 12:16:56.274050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.591 starting I/O failed 00:30:19.591 Read completed with error (sct=0, sc=8) 00:30:19.591 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 
starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O 
failed 00:30:19.592 [2024-07-25 12:16:56.274356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 
00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Read completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 Write completed with error (sct=0, sc=8) 00:30:19.592 starting I/O failed 00:30:19.592 [2024-07-25 12:16:56.274724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.592 [2024-07-25 12:16:56.274946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.274970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.275158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.275178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 
00:30:19.592 [2024-07-25 12:16:56.275337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.275356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.275671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.275690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.275932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.275951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.276118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.276136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.276393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.276423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 
00:30:19.592 [2024-07-25 12:16:56.276722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.276754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.277014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.277044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.277224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.277255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.277559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.277590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 00:30:19.592 [2024-07-25 12:16:56.277911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.592 [2024-07-25 12:16:56.277941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.592 qpair failed and we were unable to recover it. 
00:30:19.592 [2024-07-25 12:16:56.278136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.278166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.278415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.278445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.278760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.278792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.279050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.279080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.279349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.279379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.279568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.279598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.279793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.279829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.280105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.280136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.280412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.280442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.280686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.280718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.280952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.280983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.281230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.281260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.281561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.281580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.281880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.281899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.282251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.282282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.282531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.282561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.282822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.282853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.283092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.283122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.283378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.283397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.283689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.283707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.283979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.284000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.284216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.284235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.284508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.284526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.284801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.284820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.285111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.285129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.285450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.285469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.285805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.285836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.286021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.286051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.286266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.286284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.286497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.286516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.286767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.286786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.286930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.286949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.287175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.287206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.287451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.287481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.593 [2024-07-25 12:16:56.287807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.287838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 
00:30:19.593 [2024-07-25 12:16:56.288030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.593 [2024-07-25 12:16:56.288048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.593 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.288241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.288259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.288467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.288486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.288775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.288794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.289032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.289050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 
00:30:19.594 [2024-07-25 12:16:56.289295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.289313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.289615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.289635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.289791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.289809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.290054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.290072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.290217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.290235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 
00:30:19.594 [2024-07-25 12:16:56.290530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.290548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.290824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.290846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.291074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.291093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.291418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.291437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 00:30:19.594 [2024-07-25 12:16:56.291597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.291621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 
00:30:19.594 [2024-07-25 12:16:56.291835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.594 [2024-07-25 12:16:56.291865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.594 qpair failed and we were unable to recover it. 
00:30:19.597 [... the same three-line sequence (posix.c:1023 connect() failed, errno = 111 / nvme_tcp.c:2383 sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 12:16:56.291835 through 12:16:56.325600; duplicate repetitions elided ...]
00:30:19.597 [2024-07-25 12:16:56.325798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.597 [2024-07-25 12:16:56.325829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.597 qpair failed and we were unable to recover it. 00:30:19.597 [2024-07-25 12:16:56.326004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.597 [2024-07-25 12:16:56.326026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.597 qpair failed and we were unable to recover it. 00:30:19.597 [2024-07-25 12:16:56.326199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.597 [2024-07-25 12:16:56.326229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.597 qpair failed and we were unable to recover it. 00:30:19.597 [2024-07-25 12:16:56.326461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.326492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.326729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.326761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.326927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.326957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.327146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.327176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.327423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.327442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.327688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.327707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.327838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.327856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.328073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.328104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.328295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.328326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.328623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.328655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.328949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.328980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.329152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.329181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.329467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.329486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.329654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.329672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.329851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.329881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.330103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.330133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.330550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.330581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.330816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.330846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.331114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.331144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.331496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.331526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.331754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.331786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.332025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.332056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.332406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.332424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.332695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.332726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.332971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.333002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.333208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.333239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.333461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.333491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.333785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.333817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.334008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.334042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.334235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.334253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.334505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.334536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.334718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.334750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 
00:30:19.598 [2024-07-25 12:16:56.334933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.334964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.335126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.335156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.335393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.335424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.335676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.598 [2024-07-25 12:16:56.335695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.598 qpair failed and we were unable to recover it. 00:30:19.598 [2024-07-25 12:16:56.335910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.335929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.336090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.336108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.336359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.336396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.336653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.336683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.336922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.336953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.337128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.337158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.337470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.337488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.337668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.337688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.337934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.337952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.338132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.338163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.338386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.338417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.338736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.338768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.339090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.339121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.339426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.339457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.339750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.339783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.340025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.340056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.340484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.340516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.340807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.340839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.341130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.341160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.341512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.341542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.341724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.341754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.342002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.342033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.342350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.342380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.342614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.342634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.342864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.342883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.343106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.343137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.343461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.343491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.343813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.343845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.344096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.344127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.344549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.344631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.344958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.344992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.345403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.345434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.345787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.345819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.346094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.346124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.346466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.346496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.346864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.346886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 
00:30:19.599 [2024-07-25 12:16:56.347160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.599 [2024-07-25 12:16:56.347178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.599 qpair failed and we were unable to recover it. 00:30:19.599 [2024-07-25 12:16:56.347428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.600 [2024-07-25 12:16:56.347458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.600 qpair failed and we were unable to recover it. 00:30:19.600 [2024-07-25 12:16:56.347695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.600 [2024-07-25 12:16:56.347726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.600 qpair failed and we were unable to recover it. 00:30:19.600 [2024-07-25 12:16:56.347974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.600 [2024-07-25 12:16:56.348005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.600 qpair failed and we were unable to recover it. 00:30:19.600 [2024-07-25 12:16:56.348206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.600 [2024-07-25 12:16:56.348225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.600 qpair failed and we were unable to recover it. 
00:30:19.600–00:30:19.603 [2024-07-25 12:16:56.348542 … 12:16:56.381323] last 3 messages repeated 110 times (connect() failed, errno = 111; sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:30:19.603 [2024-07-25 12:16:56.381614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.381647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.381842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.381861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.382087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.382106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.382272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.382290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.382597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.382638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 
00:30:19.603 [2024-07-25 12:16:56.382871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.382902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.383226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.383257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.383590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.383631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.383959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.383980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.384238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.384257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 
00:30:19.603 [2024-07-25 12:16:56.384419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.384437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.384817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.384849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.385125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.385155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.385527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.385558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.385810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.385842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 
00:30:19.603 [2024-07-25 12:16:56.386169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.386200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.386461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.386492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.386673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.386704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.387031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.387062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.387312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.387330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 
00:30:19.603 [2024-07-25 12:16:56.387613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.387633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.387920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.387939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.388315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.388346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.388680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.388712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 00:30:19.603 [2024-07-25 12:16:56.389029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.603 [2024-07-25 12:16:56.389059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.603 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.389350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.389381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.389709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.389729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.389894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.389912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.390089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.390109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.390393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.390424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.390693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.390724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.391023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.391054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.391412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.391443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.391801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.391833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.392103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.392133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.392410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.392441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.392850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.392882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.393123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.393154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.393472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.393508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.393773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.393805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.394043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.394074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.394352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.394371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.394716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.394759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.395063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.395093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.395397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.395428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.395753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.395785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.396090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.396121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.396456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.396487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.396820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.396851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.397048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.397080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.397334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.397364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.397689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.397709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.397937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.397956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.398094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.398113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.398248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.398267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.398564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.398583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.398822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.398842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.399135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.399154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.399372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.399391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.604 [2024-07-25 12:16:56.399611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.399631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 
00:30:19.604 [2024-07-25 12:16:56.399875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.604 [2024-07-25 12:16:56.399906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.604 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.400137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.400168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.400426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.400457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.400713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.400745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.400989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.401019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 
00:30:19.605 [2024-07-25 12:16:56.401307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.401338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.401568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.401587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.401766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.401785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.401957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.401987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.402227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.402258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 
00:30:19.605 [2024-07-25 12:16:56.402484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.402514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.402768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.402788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.403058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.403077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.403368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.403387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.403671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.403691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 
00:30:19.605 [2024-07-25 12:16:56.403944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.403962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.404206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.404225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.404599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.404644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.404841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.404881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 00:30:19.605 [2024-07-25 12:16:56.405262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.605 [2024-07-25 12:16:56.405295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.605 qpair failed and we were unable to recover it. 
00:30:19.608 [2024-07-25 12:16:56.438875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.438895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.439211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.439230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.439435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.439454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.439789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.439834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.440165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.440195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 
00:30:19.608 [2024-07-25 12:16:56.440545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.440576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.440918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.440949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.441199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.441231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.441546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.441569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.441804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.441837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 
00:30:19.608 [2024-07-25 12:16:56.442026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.442056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.442345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.608 [2024-07-25 12:16:56.442375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.608 qpair failed and we were unable to recover it. 00:30:19.608 [2024-07-25 12:16:56.442726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.442758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.443009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.443040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.443362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.443393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.443704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.443724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.443932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.443951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.444302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.444322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.444619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.444638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.444864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.444883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.445049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.445068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.445389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.445421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.445741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.445773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.446084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.446115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.446315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.446345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.446678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.446710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.446873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.446904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.447176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.447210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.447541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.447560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.447898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.447918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.448255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.448286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.448532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.448551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.448786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.448806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.449101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.449120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.449299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.449320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.449541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.449573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.449910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.449943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.450197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.450227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.450465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.450496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.450839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.450877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.451082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.451113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.451425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.451456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.451753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.451774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.452025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.452045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.452417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.452448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 
00:30:19.609 [2024-07-25 12:16:56.452722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.452754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.452946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.452977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.453210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.453240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.453511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.609 [2024-07-25 12:16:56.453533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.609 qpair failed and we were unable to recover it. 00:30:19.609 [2024-07-25 12:16:56.453824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.453844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.610 [2024-07-25 12:16:56.454065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.454084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.454306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.454325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.454684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.454704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.454937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.454956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.455295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.455328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.610 [2024-07-25 12:16:56.455576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.455618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.455988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.456020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.456392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.456423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.456687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.456708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.456945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.456964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.610 [2024-07-25 12:16:56.457219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.457238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.457395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.457414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.457666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.457699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.457943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.457974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.458342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.458385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.610 [2024-07-25 12:16:56.458638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.458658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.458834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.458852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.459109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.459140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.459481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.459512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.459801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.459821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.610 [2024-07-25 12:16:56.460002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.460022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.460313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.460332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.460557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.460576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.460875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.460895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.461124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.461143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.610 [2024-07-25 12:16:56.461516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.461547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.461848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.461879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.462138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.462169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.462427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.462459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 00:30:19.610 [2024-07-25 12:16:56.462752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.610 [2024-07-25 12:16:56.462785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.610 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.495840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.495872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.496157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.496188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.496568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.496599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.496865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.496897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.497207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.497238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.497477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.497509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.497897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.497929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.498191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.498222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.498493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.498524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.498818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.498851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.499107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.499138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.499539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.499571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.499916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.499948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.500192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.500223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.500613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.500645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.500922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.500953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.501263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.501294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.501544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.501575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.501866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.501903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.502103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.502135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.502419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.502450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.502759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.502792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.503141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.503173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.503423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.503455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.503815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.503847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.504052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.504083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.504427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.504458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.504788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.504820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.505060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.505090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.505435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.505467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 
00:30:19.614 [2024-07-25 12:16:56.505679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.505712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.505920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.505939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.614 [2024-07-25 12:16:56.506117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.614 [2024-07-25 12:16:56.506137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.614 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.506468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.506487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.506743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.506763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.507049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.507068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.507430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.507461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.507799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.507831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.508084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.508116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.508513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.508545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.508823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.508856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.509103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.509135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.509403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.509434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.509707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.509728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.510008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.510027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.510261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.510281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.510499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.510518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.510826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.510847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.511077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.511096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.511457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.511488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.511743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.511775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.511976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.512008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.512384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.512415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.512654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.512687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.512885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.512916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.513225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.513256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.513613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.513645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.513896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.513916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.514169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.514191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.514497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.514516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.514826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.514858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.515190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.515221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.515534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.515565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.515914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.515947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.516253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.516284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 
00:30:19.615 [2024-07-25 12:16:56.516613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.615 [2024-07-25 12:16:56.516646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.615 qpair failed and we were unable to recover it. 00:30:19.615 [2024-07-25 12:16:56.516995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.517025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.517276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.517307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.517637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.517670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.518033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.518063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 
00:30:19.616 [2024-07-25 12:16:56.518342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.518373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.518682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.518716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.518982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.519013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.519297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.519327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.519634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.519666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 
00:30:19.616 [2024-07-25 12:16:56.519872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.519891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.520141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.520161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.520537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.520567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.520780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.520812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.521064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.521094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 
00:30:19.616 [2024-07-25 12:16:56.521458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.521490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.521786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.521818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.522145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.522177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.522545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.522576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 00:30:19.616 [2024-07-25 12:16:56.522798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.616 [2024-07-25 12:16:56.522830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.616 qpair failed and we were unable to recover it. 
00:30:19.616 [2024-07-25 12:16:56.523123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.523155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.523522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.523553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.523768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.523801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.524002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.524033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.524209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.524240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.524519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.524542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.524822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.524840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.525071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.525090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.525321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.525340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.525650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.525670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.525951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.525971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.526132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.526151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.526499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.526518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.526801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.526821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.616 qpair failed and we were unable to recover it.
00:30:19.616 [2024-07-25 12:16:56.527139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.616 [2024-07-25 12:16:56.527158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.527404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.527423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.527572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.527591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.527871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.527891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.528113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.528132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.528441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.528460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.528810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.528831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.529062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.529081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.529348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.529367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.529670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.529690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.529998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.530017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.530326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.530346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.530677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.530697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.530862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.530881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.531177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.531197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.531512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.531531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.531819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.531839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.532061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.532080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.532401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.532420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.532624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.532643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.532925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.532944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.533221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.533241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.533520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.533539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.533794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.533814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.534036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.534055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.534265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.534286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.534511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.534534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.534753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.534773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.535004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.535024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.535322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.535341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.535649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.535669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.536006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.536025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.536344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.536362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.536677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.536697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.537003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.537022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.537356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.537375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.537695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.617 [2024-07-25 12:16:56.537715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.617 qpair failed and we were unable to recover it.
00:30:19.617 [2024-07-25 12:16:56.537946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.537965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.538223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.538242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.538542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.538561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.538869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.538890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.539123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.539141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.539447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.539466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.539797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.539817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.540081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.540101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.540411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.540430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.540709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.540729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.540950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.540969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.541253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.541272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.541555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.541574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.541840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.541861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.542090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.542109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.542387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.542406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.542770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.542790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.543096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.543116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.543450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.543469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.543784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.543804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.544084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.544103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.544463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.544481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.544797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.544817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.545150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.545169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.545484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.545503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.545799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.545819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.546155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.546174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.546488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.546508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.546666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.546686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.546941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.546963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.547183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.547202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.547445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.547464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.547670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.547690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.548039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.548058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.548360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.548379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.548684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.548703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.618 qpair failed and we were unable to recover it.
00:30:19.618 [2024-07-25 12:16:56.549043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.618 [2024-07-25 12:16:56.549062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.549386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.549406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.549688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.549707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.550017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.550036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.550367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.550386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.550700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.550720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.550985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.551003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.551248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.619 [2024-07-25 12:16:56.551268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.619 qpair failed and we were unable to recover it.
00:30:19.619 [2024-07-25 12:16:56.551548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.551566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.551935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.551955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.552245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.552264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.552488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.552508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.552747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.552767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 
00:30:19.619 [2024-07-25 12:16:56.552991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.553010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.553245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.553264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.553501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.553520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.553743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.553762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.553969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.553988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 
00:30:19.619 [2024-07-25 12:16:56.554270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.554289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.554508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.554527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.554819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.554839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.555140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.555158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.555384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.555403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 
00:30:19.619 [2024-07-25 12:16:56.555548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.555567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.555804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.555825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.556031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.556051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.556299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.556318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.556631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.556651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 
00:30:19.619 [2024-07-25 12:16:56.556985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.557004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.557351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.557370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.557581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.557600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.557844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.557863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.558174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.558193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 
00:30:19.619 [2024-07-25 12:16:56.558523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.558546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.558864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.558884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.559212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.559231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.559543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.619 [2024-07-25 12:16:56.559562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.619 qpair failed and we were unable to recover it. 00:30:19.619 [2024-07-25 12:16:56.559807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.559826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.560052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.560071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.560401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.560420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.560729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.560749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.561026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.561044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.561195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.561213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.561442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.561461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.561799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.561819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.562041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.562060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.562268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.562287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.562529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.562547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.562832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.562852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.563107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.563126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.563349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.563368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.563598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.563633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.563848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.563867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.564173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.564192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.564473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.564492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.564754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.564774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.565083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.565102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.565341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.565359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.565617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.565636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.565868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.565887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.566197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.566216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.566470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.566489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.566823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.566843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.567132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.567151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.567433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.567452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.567811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.567831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.568125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.568144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.620 [2024-07-25 12:16:56.568450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.568469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 
00:30:19.620 [2024-07-25 12:16:56.568802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.620 [2024-07-25 12:16:56.568822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.620 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.569140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.569158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.569478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.569496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.569813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.569833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.570061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.570080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 
00:30:19.621 [2024-07-25 12:16:56.570392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.570414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.570749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.570769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.570997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.571016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.571302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.571322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.571635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.571654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 
00:30:19.621 [2024-07-25 12:16:56.571989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.572008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.572326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.572345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.572666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.572685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.572946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.572965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.573275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.573294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 
00:30:19.621 [2024-07-25 12:16:56.573608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.573628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.573909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.573927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.574243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.574262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.574512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.574530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.574782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.574802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 
00:30:19.621 [2024-07-25 12:16:56.575093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.575112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.575406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.575425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.575768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.575787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.576100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.576120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.576461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.576481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 
00:30:19.621 [2024-07-25 12:16:56.576716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.576736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.577017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.577040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.577269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.577288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.577529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.577548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 00:30:19.621 [2024-07-25 12:16:56.577862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.621 [2024-07-25 12:16:56.577882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.621 qpair failed and we were unable to recover it. 
00:30:19.624 [2024-07-25 12:16:56.610480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.610499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.610815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.610835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.611078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.611097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.611417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.611436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.611768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 
00:30:19.624 [2024-07-25 12:16:56.612066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.612085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.612366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.612384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.612518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.624 [2024-07-25 12:16:56.612537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-25 12:16:56.612823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.612842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.613181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.613200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-25 12:16:56.613507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.613526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.613862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.613882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.614199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.614218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.614537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.614557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.614793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.614813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-25 12:16:56.615118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.615137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.615482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.615501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.615733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.615752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.615958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.615977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.616266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.616285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-25 12:16:56.616578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.616596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.616941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.616960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.617270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.617288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.617501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.617520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.617814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.617833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-25 12:16:56.618044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.618065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.618359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.618378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.618596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.618623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.618875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.618894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.619170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.619189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-25 12:16:56.619488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.619507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.619843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.619863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.620084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.620103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.620413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.620432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.620769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.620788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-25 12:16:56.621102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.621122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.621377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.625 [2024-07-25 12:16:56.621396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-25 12:16:56.621628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.621648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.621855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.621874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.622107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.622126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.622436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.622455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.622792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.622811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.623053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.623072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.623425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.623444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.623725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.623745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.624041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.624060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.624345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.624364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.624687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.624707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.625022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.625052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.625381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.625411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.625571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.625610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.625892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.625923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.626185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.626217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.626467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.626498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.626825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.626845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.627156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.627175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.627437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.627477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.627810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.627843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.628175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.628206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.628494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.628526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.628898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.628930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.629264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.629295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.629622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.629654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.629941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.629972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.630339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.630369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.630621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.630658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.630911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.630942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.631200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.631219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.631534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.631553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.631905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.631937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 
00:30:19.626 [2024-07-25 12:16:56.632276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.632307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.632587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.632644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.632977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.633008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.626 [2024-07-25 12:16:56.633207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.626 [2024-07-25 12:16:56.633238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.626 qpair failed and we were unable to recover it. 00:30:19.627 [2024-07-25 12:16:56.633420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.627 [2024-07-25 12:16:56.633450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.627 qpair failed and we were unable to recover it. 
00:30:19.627 [2024-07-25 12:16:56.633686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.627 [2024-07-25 12:16:56.633719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.627 qpair failed and we were unable to recover it.
00:30:19.627 (the same posix.c:1023 / nvme_tcp.c:2383 error pair, each followed by "qpair failed and we were unable to recover it.", repeats for every reconnect attempt from 12:16:56.634078 through 12:16:56.670300, all with connect() errno = 111, tqpair=0x7f7bfc000b90, addr=10.0.0.2, port=4420)
00:30:19.630 [2024-07-25 12:16:56.670551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.630 [2024-07-25 12:16:56.670583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.630 qpair failed and we were unable to recover it.
00:30:19.630 [2024-07-25 12:16:56.670875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.670906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.671239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.671270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.671595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.671638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.671924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.671943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.672221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.672240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 
00:30:19.630 [2024-07-25 12:16:56.672529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.672566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.672924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.672956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.673210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.673241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.673599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.673642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.673982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.674013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 
00:30:19.630 [2024-07-25 12:16:56.674338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.674369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.674677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.674709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.675049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.675080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.675366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.675397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.675729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.675760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 
00:30:19.630 [2024-07-25 12:16:56.676097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.676128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.676434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.676465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.676792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.676824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.677052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.677071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.677289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.677308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 
00:30:19.630 [2024-07-25 12:16:56.677531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.677550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.677838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.677858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.678206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.678238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.678569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.678600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 00:30:19.630 [2024-07-25 12:16:56.678935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.630 [2024-07-25 12:16:56.678965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.630 qpair failed and we were unable to recover it. 
00:30:19.630 [2024-07-25 12:16:56.679288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.679307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.679534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.679552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.679835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.679855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.680161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.680198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.680528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.680560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.680807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.680839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.681145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.681176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.681478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.681508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.681822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.681854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.682163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.682182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.682407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.682426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.682711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.682730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.683067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.683086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.683336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.683367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.683590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.683632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.683939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.683969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.684343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.684373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.684637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.684671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.685016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.685047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.685295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.685325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.685686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.685717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.686090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.686121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.686451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.686481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.686726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.686758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.687099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.687131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.687464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.687495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.687800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.687857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.688207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.688238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.688544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.688575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.688895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.688926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.689239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.689269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.689511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.689542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.689915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.689947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.690285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.690315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.690548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.690579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 
00:30:19.631 [2024-07-25 12:16:56.690925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.690956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.631 [2024-07-25 12:16:56.691202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.631 [2024-07-25 12:16:56.691234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.631 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.691468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.691490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.691794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.691814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.692133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.692151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 
00:30:19.632 [2024-07-25 12:16:56.692486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.692505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.692825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.692845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.693152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.693171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.693375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.693394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.693728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.693747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 
00:30:19.632 [2024-07-25 12:16:56.694029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.694048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.694350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.694369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.694692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.694712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.695024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.695055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 00:30:19.632 [2024-07-25 12:16:56.695386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.632 [2024-07-25 12:16:56.695416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.632 qpair failed and we were unable to recover it. 
00:30:19.632 [2024-07-25 12:16:56.695746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.632 [2024-07-25 12:16:56.695778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.632 qpair failed and we were unable to recover it.
00:30:19.635 [2024-07-25 12:16:56.733111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.733130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.733457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.733494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.733846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.733878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.734185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.734216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.734550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.734585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 
00:30:19.635 [2024-07-25 12:16:56.734906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.734938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.735277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.735307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.735640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.735672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.736004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.736035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.736342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.736373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 
00:30:19.635 [2024-07-25 12:16:56.736700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.736721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.736956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.736975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.737278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.737297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.737640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.737680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.738046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.738077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 
00:30:19.635 [2024-07-25 12:16:56.738408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.738439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.738777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.738808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.739134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.739153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.739381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.739400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.739711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.739731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 
00:30:19.635 [2024-07-25 12:16:56.739980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.739999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.740297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.740316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.635 [2024-07-25 12:16:56.740648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.635 [2024-07-25 12:16:56.740687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.635 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.741027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.741057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.741385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.741416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.741720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.741751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.742076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.742119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.742362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.742382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.742680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.742699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.743010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.743029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.743329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.743359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.743611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.743644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.743975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.744023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.744275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.744294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.744511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.744530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.744756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.744776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.744942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.744961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.745245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.745276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.745522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.745553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.745931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.745963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.746297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.746327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.746576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.746615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.746950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.746980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.747247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.747278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.747639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.747670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.748000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.748040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.748382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.748426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.748760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.748791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.749125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.749156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.749432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.749463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.749797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.749829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.750163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.750194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.750501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.750532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.750835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.750867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.751199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.751229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.751534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.751565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.751803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.751834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.752145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.752176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.752501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.752520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.636 [2024-07-25 12:16:56.752834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.752853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 
00:30:19.636 [2024-07-25 12:16:56.753105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.636 [2024-07-25 12:16:56.753136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.636 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.753466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.753497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.753828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.753860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.754183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.754202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.754449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.754480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 
00:30:19.637 [2024-07-25 12:16:56.754803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.754834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.755138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.755168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.755390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.755409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.755573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.755592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.755834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.755866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 
00:30:19.637 [2024-07-25 12:16:56.756173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.756204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.756541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.756559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.756846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.756870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.757212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.757231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 00:30:19.637 [2024-07-25 12:16:56.757579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.637 [2024-07-25 12:16:56.757620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.637 qpair failed and we were unable to recover it. 
00:30:19.637 [2024-07-25 12:16:56.757954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.637 [2024-07-25 12:16:56.757985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.637 qpair failed and we were unable to recover it.
00:30:19.640 [... the same three-line error sequence repeats for every subsequent reconnect attempt from 12:16:56.758313 through 12:16:56.794208: posix.c:1023:posix_sock_create reports connect() failed with errno = 111 against addr=10.0.0.2, port=4420, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports the sock connection error for the same tqpair=0x7f7bfc000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:19.640 [2024-07-25 12:16:56.794520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.794539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.794832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.794852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.795075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.795098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.795407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.795426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.795774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.795794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 
00:30:19.640 [2024-07-25 12:16:56.796097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.796116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.796455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.796473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.796625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.796646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.796816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.796835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.797147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.797166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 
00:30:19.640 [2024-07-25 12:16:56.797374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.797393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.797706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.797726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.798009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.798028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.798309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.798327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.798614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.798634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 
00:30:19.640 [2024-07-25 12:16:56.798846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.798865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.799160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.799179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.799417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.640 [2024-07-25 12:16:56.799436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.640 qpair failed and we were unable to recover it. 00:30:19.640 [2024-07-25 12:16:56.799726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.799746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.799985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.800004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 
00:30:19.641 [2024-07-25 12:16:56.800282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.800302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.800616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.800635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.800943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.800962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.801216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.801235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.801548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.801567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 
00:30:19.641 [2024-07-25 12:16:56.801775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.801795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.802081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.802100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.802398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.802417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.802774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.802793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.803127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.803147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 
00:30:19.641 [2024-07-25 12:16:56.803428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.803447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.803698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.803718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.803923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.803943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.804160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.804178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.804502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.804521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 
00:30:19.641 [2024-07-25 12:16:56.804801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.804820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.805115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.805134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.805492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.805511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.805839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.805859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.806141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.806160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 
00:30:19.641 [2024-07-25 12:16:56.806390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.806408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.806617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.806637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.806864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.806887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.807170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.807188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.807521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.807540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 
00:30:19.641 [2024-07-25 12:16:56.807852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.807872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.808195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.808214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.808420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.808439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.808720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.641 [2024-07-25 12:16:56.808739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.641 qpair failed and we were unable to recover it. 00:30:19.641 [2024-07-25 12:16:56.808950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.808969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 
00:30:19.642 [2024-07-25 12:16:56.809263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.809281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.809570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.809589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.809871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.809890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.810196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.810216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.810367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.810386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 
00:30:19.642 [2024-07-25 12:16:56.810724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.810743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.811009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.811029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.811185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.811204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.811432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.811451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.811731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.811751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 
00:30:19.642 [2024-07-25 12:16:56.812058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.812077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.812313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.812333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.812639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.812658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.812939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.812958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.813265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.813284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 
00:30:19.642 [2024-07-25 12:16:56.813616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.813636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.813956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.813975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.814196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.814215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.814551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.814570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.814825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.814844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 
00:30:19.642 [2024-07-25 12:16:56.815152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.815170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.815395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.815414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.815693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.815713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.815941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.815960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 00:30:19.642 [2024-07-25 12:16:56.816094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.642 [2024-07-25 12:16:56.816114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.642 qpair failed and we were unable to recover it. 
00:30:19.642 [2024-07-25 12:16:56.816408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.816427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.816770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.816790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.817070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.817088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.817324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.817342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.817653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.817673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.818006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.818025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.818347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.818366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.818652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.818675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.818894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.818913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.819219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.819238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.819578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.642 [2024-07-25 12:16:56.819597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.642 qpair failed and we were unable to recover it.
00:30:19.642 [2024-07-25 12:16:56.819839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.819858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.820188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.820208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.820560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.820579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.820878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.820899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.821106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.821125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.821423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.821442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.821724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.821743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.822061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.822080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.822303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.822322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.822574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.822594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.822826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.822845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.822999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.823018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.823301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.823320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.823617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.823637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.823941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.823960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.824263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.824282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.824629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.824648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.824978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.824997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.825218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.825237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.825454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.825473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.825779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.825798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.825936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.825956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.826169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.826188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.826509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.826528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.826837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.826856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.827081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.827100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.827351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.827370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.827595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.827621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.827846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.827865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.828090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.828109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.828339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.828358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.828591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.828624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.828933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.828952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.829281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.829300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.829539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.829557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.643 qpair failed and we were unable to recover it.
00:30:19.643 [2024-07-25 12:16:56.829839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.643 [2024-07-25 12:16:56.829859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.830208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.830231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.830528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.830547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.830879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.830899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.831110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.831129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.831373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.831392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.831700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.831719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.831938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.831957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.832094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.832113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.832416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.832435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.832656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.832675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.832934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.832964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.833207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.833237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.833489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.833508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.833809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.833829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.834170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.834189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.834522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.834553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.834894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.834930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.835240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.835270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.835515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.835545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.835806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.835826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.836133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.836153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.836488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.836519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.836827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.836859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.837181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.837212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.837538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.837568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.837848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.837880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.644 qpair failed and we were unable to recover it.
00:30:19.644 [2024-07-25 12:16:56.838247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.644 [2024-07-25 12:16:56.838277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.838628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.838662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.838937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.838967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.839207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.839237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.839569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.839600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.839857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.839877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.840160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.840179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.840383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.840402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.840705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.840725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.841057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.841075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.841317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.841361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.841694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.841726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.842000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.842031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.842211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.842242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.842573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.842614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.842930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.842949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.843179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.843198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.843509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.843527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.843816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.843836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.844118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.844137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.844461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.844480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.844785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.844805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.845150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.845180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.845465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.845496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.845829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.845848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.846153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.846172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.846425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.846456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.846827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.846858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.847197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.847227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.847559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.847589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.847907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.847938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.848254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.848285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.848596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.848639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.848980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.849022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.849275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.849306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.849662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.849695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.850081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.645 [2024-07-25 12:16:56.850111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.645 qpair failed and we were unable to recover it.
00:30:19.645 [2024-07-25 12:16:56.850353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.646 [2024-07-25 12:16:56.850372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.646 qpair failed and we were unable to recover it.
00:30:19.646 [2024-07-25 12:16:56.850547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.646 [2024-07-25 12:16:56.850565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.646 qpair failed and we were unable to recover it.
00:30:19.646 [2024-07-25 12:16:56.850874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.646 [2024-07-25 12:16:56.850894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.646 qpair failed and we were unable to recover it.
00:30:19.646 [2024-07-25 12:16:56.851171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.646 [2024-07-25 12:16:56.851190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.646 qpair failed and we were unable to recover it.
00:30:19.646 [2024-07-25 12:16:56.851444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.646 [2024-07-25 12:16:56.851480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.646 qpair failed and we were unable to recover it.
00:30:19.646 [2024-07-25 12:16:56.851841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.851873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.852137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.852168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.852412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.852443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.852748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.852780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.853106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.853137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 
00:30:19.646 [2024-07-25 12:16:56.853446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.853466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.853672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.853692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.853899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.853918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.854229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.854248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.854581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.854626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 
00:30:19.646 [2024-07-25 12:16:56.854934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.854964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.855275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.855306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.855625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.855657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.855944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.855975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.856230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.856261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 
00:30:19.646 [2024-07-25 12:16:56.856501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.856520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.856737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.856757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.857061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.857080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.857429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.857459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.857710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.857743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 
00:30:19.646 [2024-07-25 12:16:56.858047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.858077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.858416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.858447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.858693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.858713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.858993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.859012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.859308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.859327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 
00:30:19.646 [2024-07-25 12:16:56.859641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.859673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.860006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.860036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.860288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.860319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.646 [2024-07-25 12:16:56.860577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.646 [2024-07-25 12:16:56.860615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.646 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.860929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.860960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 
00:30:19.647 [2024-07-25 12:16:56.861288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.861318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.861590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.861618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.861901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.861932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.862288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.862319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.862635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.862668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 
00:30:19.647 [2024-07-25 12:16:56.862919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.862950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.863239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.863270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.863590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.863616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.863972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.863991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 00:30:19.647 [2024-07-25 12:16:56.864354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.647 [2024-07-25 12:16:56.864391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.647 qpair failed and we were unable to recover it. 
00:30:19.926 [2024-07-25 12:16:56.864725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.926 [2024-07-25 12:16:56.864758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.926 qpair failed and we were unable to recover it. 00:30:19.926 [2024-07-25 12:16:56.864957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.926 [2024-07-25 12:16:56.864989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.865301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.865332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.865690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.865736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.866089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.866121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.866386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.866418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.866627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.866660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.866969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.866988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.867292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.867311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.867641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.867661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.867884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.867904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.868216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.868235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.868568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.868586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.868906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.868926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.869163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.869181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.869485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.869504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.869733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.869754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.870068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.870087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.870395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.870414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.870654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.870692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.871036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.871067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.871338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.871369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.871704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.871737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.872072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.872103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.872336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.872367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.872635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.872667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.872952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.872983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.873347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.873377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.873641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.873673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.874021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.874052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.874386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.874416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.874722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.874754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.875061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.875093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.875416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.875448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 00:30:19.927 [2024-07-25 12:16:56.875790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.927 [2024-07-25 12:16:56.875823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.927 qpair failed and we were unable to recover it. 
00:30:19.927 [2024-07-25 12:16:56.876095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.927 [2024-07-25 12:16:56.876126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.927 qpair failed and we were unable to recover it.
[repeated retry records elided: the same three-line sequence — posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — recurs with advancing timestamps from [2024-07-25 12:16:56.876402] through [2024-07-25 12:16:56.913506]]
00:30:19.931 [2024-07-25 12:16:56.913783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.913803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.914148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.914168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.914446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.914465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.914699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.914718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.915016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.915034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 
00:30:19.931 [2024-07-25 12:16:56.915368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.915387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.915696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.915728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.916065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.916095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.916423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.916454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.916760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.916797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 
00:30:19.931 [2024-07-25 12:16:56.917067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.917097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.917458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.917488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.917793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.917824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.918080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.918111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.918484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.918514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 
00:30:19.931 [2024-07-25 12:16:56.918818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.918850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.919116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.919147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.919512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.919543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.919904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.919936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.920187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.920218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 
00:30:19.931 [2024-07-25 12:16:56.920550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.920581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.920936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.920968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.921301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.921332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.921643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.921675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.921929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.921959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 
00:30:19.931 [2024-07-25 12:16:56.922226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.922257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.922509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.922540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.922899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.922932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.923181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.923212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 00:30:19.931 [2024-07-25 12:16:56.923571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.931 [2024-07-25 12:16:56.923610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.931 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.923912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.923931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.924259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.924298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.924616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.924648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.924994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.925025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.925354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.925395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.925682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.925701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.925953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.925972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.926146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.926165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.926493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.926524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.926865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.926896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.927197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.927227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.927554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.927585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.927879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.927911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.928143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.928174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.928514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.928545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.928893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.928925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.929185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.929216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.929572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.929614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.929866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.929896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.930250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.930287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.930643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.930683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.931004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.931035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.931309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.931340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.931667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.931698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.931939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.931958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.932161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.932181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.932485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.932504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.932786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.932807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.933058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.933077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.933387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.933406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.933635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.933655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.933882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.933902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.934041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.934059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.934368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.934398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.934732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.934765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 
00:30:19.932 [2024-07-25 12:16:56.935024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.935045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.932 [2024-07-25 12:16:56.935324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.932 [2024-07-25 12:16:56.935343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.932 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.935653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.935674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.935918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.935938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.936177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.936196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 
00:30:19.933 [2024-07-25 12:16:56.936479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.936510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.936848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.936880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.937148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.937179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.937366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.937396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 00:30:19.933 [2024-07-25 12:16:56.937644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.937663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 
00:30:19.933 [2024-07-25 12:16:56.937982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.933 [2024-07-25 12:16:56.938001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.933 qpair failed and we were unable to recover it. 
[... same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every retry from [2024-07-25 12:16:56.938333] through [2024-07-25 12:16:56.973650] ...]
00:30:19.936 [2024-07-25 12:16:56.973879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.973899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.974180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.974199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.974487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.974506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.974822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.974842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.975088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.975107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 
00:30:19.936 [2024-07-25 12:16:56.975276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.975299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.975635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.975655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.975808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.975827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.976049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.976068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.976289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.976307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 
00:30:19.936 [2024-07-25 12:16:56.976563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.976594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.976938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.976970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.977169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.977200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.977457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.977488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.977721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.977741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 
00:30:19.936 [2024-07-25 12:16:56.977915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.977934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.978243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.978273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.978524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.978554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.978827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.978846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 00:30:19.936 [2024-07-25 12:16:56.979063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.936 [2024-07-25 12:16:56.979087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.936 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.979424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.979442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.979782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.979814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.980141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.980172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.980460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.980490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.980804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.980837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.981116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.981147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.981317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.981347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.981583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.981641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.981949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.981980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.982224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.982243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.982551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.982570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.982821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.982841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.983070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.983089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.983398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.983417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.983576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.983595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.983916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.983935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.984247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.984266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.984492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.984511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.984822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.984842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.985068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.985089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.985415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.985446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.985677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.985709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.986018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.986048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.986315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.986348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.986654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.986686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.987019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.987049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.987401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.987432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.987680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.987712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.988042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.988072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.988387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.988418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.988733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.988766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.989021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.989052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.989356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.989387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.989714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.989746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.990075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.990106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 
00:30:19.937 [2024-07-25 12:16:56.990449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.990480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.937 [2024-07-25 12:16:56.990790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.937 [2024-07-25 12:16:56.990810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.937 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.991028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.991048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.991329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.991348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.991646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.991669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 
00:30:19.938 [2024-07-25 12:16:56.992023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.992062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.992347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.992378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.992688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.992721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.993077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.993107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.993369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.993399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 
00:30:19.938 [2024-07-25 12:16:56.993758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.993790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.994156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.994186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.994426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.994457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.994784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.994816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.995067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.995086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 
00:30:19.938 [2024-07-25 12:16:56.995309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.995328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.995622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.995642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.995931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.995951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.996176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.996195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.996441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.996459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 
00:30:19.938 [2024-07-25 12:16:56.996794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.996814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.997111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.997130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.997361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.997392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.997705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.997737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 00:30:19.938 [2024-07-25 12:16:56.998078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.938 [2024-07-25 12:16:56.998119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.938 qpair failed and we were unable to recover it. 
00:30:19.941 [last three messages repeated, with new timestamps, from 12:16:56.998371 through 12:16:57.030554]
00:30:19.941 [2024-07-25 12:16:57.030830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.030850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.031021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.031040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.031328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.031347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.031585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.031612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.031839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.031858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 
00:30:19.941 [2024-07-25 12:16:57.032109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.032128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.032379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.032398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.032684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.032705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.032942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.032962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.033254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.033273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 
00:30:19.941 [2024-07-25 12:16:57.033516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.033535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.941 [2024-07-25 12:16:57.033817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.941 [2024-07-25 12:16:57.033837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.941 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.034118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.034137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.034298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.034318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.034468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.034488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.034640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.034660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.034881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.034904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.035185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.035207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.035452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.035472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.035779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.035799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.036041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.036059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.036267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.036286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.036427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.036446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.036739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.036759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.036920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.036940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.037184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.037203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.037350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.037369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.037680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.037700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.037917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.037936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.038113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.038133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.038355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.038374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.038628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.038648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.038873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.038893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.039180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.039199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.039308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.039327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.039614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.039634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.039783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.039802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.040009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.040028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.040187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.040207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.040348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.040368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.040664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.040684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.040888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.040908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.041130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.041149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.041375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.041394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.041545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.041563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 
00:30:19.942 [2024-07-25 12:16:57.041871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.041891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.042097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.042116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.042433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.042452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.042754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.042775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.942 qpair failed and we were unable to recover it. 00:30:19.942 [2024-07-25 12:16:57.042942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.942 [2024-07-25 12:16:57.042962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
00:30:19.943 [2024-07-25 12:16:57.043193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.043213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.043414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.043433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.043656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.043675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.043921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.043940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.044053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.044072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
00:30:19.943 [2024-07-25 12:16:57.044345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.044364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.044500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.044523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.044726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.044746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.044975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.044994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.045139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.045158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
00:30:19.943 [2024-07-25 12:16:57.045435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.045455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.045786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.045806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.046061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.046080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.046308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.046328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.046477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.046496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
00:30:19.943 [2024-07-25 12:16:57.046659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.046679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.046896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.046915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.047119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.047138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.047365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.047384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.047617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.047637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
00:30:19.943 [2024-07-25 12:16:57.047919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.047939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.048151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.048170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.048501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.048520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.048769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.048789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 00:30:19.943 [2024-07-25 12:16:57.048933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.048952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
00:30:19.943 [2024-07-25 12:16:57.049104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.943 [2024-07-25 12:16:57.049123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.943 qpair failed and we were unable to recover it. 
[... the same posix.c:1023:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error pair repeats continuously from 12:16:57.049358 through 12:16:57.076806, always for tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:19.946 [2024-07-25 12:16:57.077017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.946 [2024-07-25 12:16:57.077036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.946 qpair failed and we were unable to recover it. 00:30:19.946 [2024-07-25 12:16:57.077309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.077327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.077505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.077523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.077826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.077846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.078044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.078063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.078229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.078247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.078408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.078426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.078659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.078679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.078966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.078985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.079190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.079209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.079372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.079391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.079597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.079624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.079861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.079879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.080082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.080101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.080401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.080420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.080570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.080589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.080830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.080849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.081118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.081136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.081269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.081288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.081560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.081579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.081783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.081802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.082076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.082094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.082295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.082314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.082464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.082482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.082764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.082784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.082998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.083017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.083255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.083277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.083481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.083500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.083629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.083648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.083775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.083793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.084097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.084115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.084385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.084403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.084550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.084569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.084794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.084813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.085082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.085103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 
00:30:19.947 [2024-07-25 12:16:57.085398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.085417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.085574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.085593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.085916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.085935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.086235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.947 [2024-07-25 12:16:57.086254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.947 qpair failed and we were unable to recover it. 00:30:19.947 [2024-07-25 12:16:57.086553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.086571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.086816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.086836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.087055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.087074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.087220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.087239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.087448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.087467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.087619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.087638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.087857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.087875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.088021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.088039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.088284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.088303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.088513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.088532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.088660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.088680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.088898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.088916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.089136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.089154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.089328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.089347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.089556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.089575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.089731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.089751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.090049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.090068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.090184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.090203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.090437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.090456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.090772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.090791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.091026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.091044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.091280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.091299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.091429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.091448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.091739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.091758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.092064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.092083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.092342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.092361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.092509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.092527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.092729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.092752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.092953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.092972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.093247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.093266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.093418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.093436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 
00:30:19.948 [2024-07-25 12:16:57.093756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.093775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.093937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.093955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.094227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.948 [2024-07-25 12:16:57.094245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.948 qpair failed and we were unable to recover it. 00:30:19.948 [2024-07-25 12:16:57.094513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.949 [2024-07-25 12:16:57.094532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.949 qpair failed and we were unable to recover it. 00:30:19.949 [2024-07-25 12:16:57.094766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.949 [2024-07-25 12:16:57.094785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.949 qpair failed and we were unable to recover it. 
00:30:19.949 [2024-07-25 12:16:57.094991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.949 [2024-07-25 12:16:57.095010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.949 qpair failed and we were unable to recover it. 
[The error pair above (posix_sock_create connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error and "qpair failed and we were unable to recover it.") repeats roughly 115 more times between 12:16:57.095 and 12:16:57.124, always with the same errno = 111, tqpair=0x7f7bfc000b90, addr=10.0.0.2, port=4420.]
00:30:19.952 [2024-07-25 12:16:57.124778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.124797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.125052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.125072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.125361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.125381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.125593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.125619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.125791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.125810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 
00:30:19.952 [2024-07-25 12:16:57.126033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.126065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.126229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.126259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.126480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.126511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.126758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.126789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.127019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.127037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 
00:30:19.952 [2024-07-25 12:16:57.127273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.127294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.127540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.127559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.127757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.127778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.127920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.127939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.128082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.128101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 
00:30:19.952 [2024-07-25 12:16:57.128379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.128398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.128550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.128568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.128844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.128863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.952 qpair failed and we were unable to recover it. 00:30:19.952 [2024-07-25 12:16:57.129004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.952 [2024-07-25 12:16:57.129022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.129248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.129267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.129484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.129503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.129704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.129735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.129999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.130030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.130255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.130332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.130585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.130630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.130944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.130976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.131294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.131324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.132196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.132236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.132488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.132520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.132754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.132787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.133040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.133070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.133217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.133247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.133535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.133566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.133755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.133786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.134018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.134048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.134303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.134337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.134559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.134589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.134758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.134789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.134950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.134980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.135257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.135290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.135521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.135540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.135806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.135826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.136023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.136041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.136259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.136277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.136410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.136428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.136590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.136616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.136841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.136859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.137068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.137086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.137288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.137306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.137443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.137462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.137617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.137636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.137894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.137912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.138074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.138093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.138305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.138323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.138647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.138666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 
00:30:19.953 [2024-07-25 12:16:57.138957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.953 [2024-07-25 12:16:57.138975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.953 qpair failed and we were unable to recover it. 00:30:19.953 [2024-07-25 12:16:57.139103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.139121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.139267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.139286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.139434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.139452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.139726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.139746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 
00:30:19.954 [2024-07-25 12:16:57.139978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.140010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.140201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.140232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.140392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.140423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.140584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.140641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.140895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.140925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 
00:30:19.954 [2024-07-25 12:16:57.141100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.141131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.141302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.141321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.141529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.141547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.141817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.141848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.142085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.142116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 
00:30:19.954 [2024-07-25 12:16:57.142277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.142307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.142614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.142633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.142907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.142925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.143127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.143145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 00:30:19.954 [2024-07-25 12:16:57.143307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.143325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it. 
00:30:19.954 [2024-07-25 12:16:57.143534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.954 [2024-07-25 12:16:57.143564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.954 qpair failed and we were unable to recover it.
00:30:19.954-00:30:19.957 [2024-07-25 12:16:57.143838 through 12:16:57.171028] (repeated ~110x) posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: identical connect() failures, errno = 111 (ECONNREFUSED), tqpair=0x7f7bfc000b90, addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."
00:30:19.957 [2024-07-25 12:16:57.171320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.171338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.171535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.171553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.171769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.171787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.171999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.172017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.172284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.172302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-07-25 12:16:57.172510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.172528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.172725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.172743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.173044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.173075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.173304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.173334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 00:30:19.957 [2024-07-25 12:16:57.173610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.957 [2024-07-25 12:16:57.173629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.173920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.173938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.174233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.174263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.174497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.174527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.174706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.174738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.175026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.175056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.175360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.175379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.175643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.175661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.175876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.175894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.176161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.176179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.176505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.176535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.176703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.176735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.176973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.177002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.177269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.177299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.177528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.177546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.177758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.177777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.177989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.178007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.178216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.178234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.178436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.178466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.178723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.178754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.178909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.178939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.179228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.179258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.179553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.179584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.179843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.179874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.180187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.180222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.180526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.180556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.180864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.180895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.181136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.181166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.181396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.181414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.181623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.181654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.181851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.181882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.182029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.182059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.182395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.182413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.182624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.182643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.182858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.182876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.183155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.183173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 
00:30:19.958 [2024-07-25 12:16:57.183356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.183374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.183620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.183651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.958 qpair failed and we were unable to recover it. 00:30:19.958 [2024-07-25 12:16:57.183887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.958 [2024-07-25 12:16:57.183917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.184152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.184182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.184469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.184499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
00:30:19.959 [2024-07-25 12:16:57.184772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.184791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.185027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.185045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.185239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.185259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.185408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.185443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.185678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.185709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
00:30:19.959 [2024-07-25 12:16:57.186000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.186030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.186282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.186312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.186535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.186552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.186760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.186779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.187048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.187079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
00:30:19.959 [2024-07-25 12:16:57.187318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.187349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.187511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.187541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.187871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.187902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.188226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.188270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.188532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.188550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
00:30:19.959 [2024-07-25 12:16:57.188796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.188814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.188940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.188958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.189103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.189121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.189311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.189330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.189567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.189585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
00:30:19.959 [2024-07-25 12:16:57.189854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.189872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.190025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.190044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.190343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.190374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.190616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.190648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 00:30:19.959 [2024-07-25 12:16:57.191001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.191031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
00:30:19.959 [2024-07-25 12:16:57.191281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.959 [2024-07-25 12:16:57.191311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.959 qpair failed and we were unable to recover it. 
[identical three-line error sequence (posix.c:1023:posix_sock_create connect() errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 12:16:57.191 through 12:16:57.222; log timestamps advance from 00:30:19.959 to 00:30:20.243; repeats omitted]
00:30:20.243 [2024-07-25 12:16:57.222320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.243 [2024-07-25 12:16:57.222338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.243 qpair failed and we were unable to recover it. 00:30:20.243 [2024-07-25 12:16:57.222547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.243 [2024-07-25 12:16:57.222576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.243 qpair failed and we were unable to recover it. 00:30:20.243 [2024-07-25 12:16:57.222749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.243 [2024-07-25 12:16:57.222780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.243 qpair failed and we were unable to recover it. 00:30:20.243 [2024-07-25 12:16:57.223017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.243 [2024-07-25 12:16:57.223047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.243 qpair failed and we were unable to recover it. 00:30:20.243 [2024-07-25 12:16:57.223366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.243 [2024-07-25 12:16:57.223384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.243 qpair failed and we were unable to recover it. 
00:30:20.243 [2024-07-25 12:16:57.223623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.243 [2024-07-25 12:16:57.223655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.223841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.223871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.224091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.224121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.224349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.224378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.224664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.224683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.224971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.224989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.225201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.225219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.225514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.225532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.225824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.225842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.225991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.226009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.226272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.226290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.226502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.226520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.226693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.226712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.226981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.227011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.227323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.227354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.227520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.227550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.227786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.227805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.227951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.227969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.228191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.228209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.228457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.228488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.228640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.228672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.228902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.228933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.229243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.229261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.229450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.229468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.229739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.229758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.230047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.230072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.230293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.230311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.230534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.230552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.230767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.230786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.230951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.230969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.231208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.231238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.231455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.231783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.231801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.232042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.232059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.232269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.232287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 
00:30:20.244 [2024-07-25 12:16:57.232498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.232515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.232805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.244 [2024-07-25 12:16:57.232824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.244 qpair failed and we were unable to recover it. 00:30:20.244 [2024-07-25 12:16:57.233037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.233055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.233338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.233355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.233596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.233621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.233782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.233801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.234063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.234081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.234346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.234377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.234553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.234583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.234855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.234887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.235139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.235168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.235458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.235492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.235662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.235694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.235915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.235945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.236268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.236298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.236532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.236550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.236862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.236881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.237181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.237212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.237443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.237473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.237707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.237738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.237964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.237995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.238160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.238190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.238478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.238508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.238762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.238793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.239030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.239061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.239236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.239254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.239454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.239485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.239798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.239829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.240077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.240116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.240330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.240348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.240475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.240496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.240700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.240719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.240878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.240896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.241203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.241233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 00:30:20.245 [2024-07-25 12:16:57.241467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.245 [2024-07-25 12:16:57.241486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.245 qpair failed and we were unable to recover it. 
00:30:20.245 [2024-07-25 12:16:57.241653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.245 [2024-07-25 12:16:57.241671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.245 qpair failed and we were unable to recover it.
00:30:20.245 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously for every reconnect attempt from 12:16:57.241 through 12:16:57.272 ...]
00:30:20.249 [2024-07-25 12:16:57.272136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.249 [2024-07-25 12:16:57.272154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.249 qpair failed and we were unable to recover it.
00:30:20.249 [2024-07-25 12:16:57.272413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.272435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.272628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.272647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.272774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.272792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.273084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.273103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.273308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.273326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 
00:30:20.249 [2024-07-25 12:16:57.273533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.273551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.273682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.273701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.273918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.273949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.274168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.274198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.274483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.274501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 
00:30:20.249 [2024-07-25 12:16:57.274643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.274661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.274796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.274814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.275032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.275050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.275338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.275356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.275641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.275660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 
00:30:20.249 [2024-07-25 12:16:57.275857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.275875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.249 [2024-07-25 12:16:57.276086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.249 [2024-07-25 12:16:57.276104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.249 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.276368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.276387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.276534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.276552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.276822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.276841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.277107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.277125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.277239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.277258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.277530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.277548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.277838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.277856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.278122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.278140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.278345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.278363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.278635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.278654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.278894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.278913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.279160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.279178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.279452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.279470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.279685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.279703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.279916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.279934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.280142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.280160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.280451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.280469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.280704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.280723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.280961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.280980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.281112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.281129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.281293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.281311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.281475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.281493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.281702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.281722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.281873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.281897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.282045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.282063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.282257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.282275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.282486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.282504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.282719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.282738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.283027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.283046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.283255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.283273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.283408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.283426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.283634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.283653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.283846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.283864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 
00:30:20.250 [2024-07-25 12:16:57.284155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.284174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.284409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.284427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.284636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.250 [2024-07-25 12:16:57.284656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.250 qpair failed and we were unable to recover it. 00:30:20.250 [2024-07-25 12:16:57.284849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.284868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.285106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.285124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 
00:30:20.251 [2024-07-25 12:16:57.285334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.285355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.285513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.285531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.285684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.285703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.285819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.285837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.286048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.286066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 
00:30:20.251 [2024-07-25 12:16:57.286288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.286307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.286533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.286551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.286755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.286774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.287036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.287055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.287261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.287279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 
00:30:20.251 [2024-07-25 12:16:57.287419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.287438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.287701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.287720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.287860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.287879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.288116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.288134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.288368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.288386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 
00:30:20.251 [2024-07-25 12:16:57.288593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.288619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.288830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.288848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.289057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.289075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.289219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.289237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 00:30:20.251 [2024-07-25 12:16:57.289525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.251 [2024-07-25 12:16:57.289544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.251 qpair failed and we were unable to recover it. 
00:30:20.251 [2024-07-25 12:16:57.289890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.289910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.290044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.290062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.290264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.290281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.290423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.290441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.290732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.290751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.291013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.291035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.291267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.291285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.291514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.291532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.291726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.291744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.292028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.292047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.292292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.292311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.292463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.292481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.292822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.292841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.293104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.293122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.251 qpair failed and we were unable to recover it.
00:30:20.251 [2024-07-25 12:16:57.293255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.251 [2024-07-25 12:16:57.293273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.252 qpair failed and we were unable to recover it.
00:30:20.252 [2024-07-25 12:16:57.293536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.252 [2024-07-25 12:16:57.293554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.252 qpair failed and we were unable to recover it.
00:30:20.252 [2024-07-25 12:16:57.293766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.252 [2024-07-25 12:16:57.293785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.252 qpair failed and we were unable to recover it.
00:30:20.252 [2024-07-25 12:16:57.293992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.252 [2024-07-25 12:16:57.294010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.252 qpair failed and we were unable to recover it.
00:30:20.252 [2024-07-25 12:16:57.294284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.252 [2024-07-25 12:16:57.294302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.252 qpair failed and we were unable to recover it.
00:30:20.252 [2024-07-25 12:16:57.294456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.294474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.294637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.294656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.294944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.294962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.295167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.295185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.295390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.295408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.295696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.295714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.295980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.295998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.296199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.296218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.296329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.296347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.296592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.296616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.296772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.296791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.297003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.297021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.297264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.297282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.297420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.297438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.297646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.297665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.297834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.297853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.298174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.298193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.298344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.298363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.298645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.298664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.298973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.298991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.299288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.299306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.299458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.299476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.299712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.299731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.300026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.300044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.300254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.300273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.300431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.300449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.300661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.300683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.300896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.300914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.301119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.301137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.301292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.301311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.301511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.301529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.301728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.301748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.301955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.301973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.302189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.302207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.302375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.302394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.302619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.302638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.253 [2024-07-25 12:16:57.302836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.253 [2024-07-25 12:16:57.302855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.253 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.303003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.303021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.303282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.303300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.303518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.303536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.303750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.303769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.303977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.303995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.304212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.304231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.304371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.304390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.304609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.304628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.304782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.304800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.304921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.304940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.305131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.305149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.305300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.305319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.305495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.305514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.305776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.305796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.306075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.306094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.306244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.306262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.306514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.306533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.306754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.306772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.307074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.307092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.307370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.307388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.307702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.307721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.307934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.307953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.308164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.308182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.308306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.308324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.308610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.308628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.308847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.308865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.309076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.309095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.309303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.309321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.309534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.309552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.309841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.309863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.310065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.310084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.310287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.310305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.310462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.310481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.310681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.310700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.310961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.310980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.311243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.254 [2024-07-25 12:16:57.311261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.254 qpair failed and we were unable to recover it.
00:30:20.254 [2024-07-25 12:16:57.311475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.311494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.311810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.311829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.312120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.312139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.312333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.312351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.312566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.312585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.312801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.312820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.312990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.313008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.313275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.313294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.313531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.313550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.313701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.313720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.313961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.313979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.314107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.314125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.314430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.314448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.314659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.314677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.314913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.314931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.315196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.315214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.315480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.315498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.315726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.315745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.315955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.315973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.316261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.316279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.316547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.316565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.316763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.316782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.316950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.255 [2024-07-25 12:16:57.316968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.255 qpair failed and we were unable to recover it.
00:30:20.255 [2024-07-25 12:16:57.317163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.317181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.317416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.317435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.317560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.317578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.317803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.317822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.318019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.318037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 
00:30:20.255 [2024-07-25 12:16:57.318237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.318255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.318518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.318537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.318735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.318754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.318952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.318971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.319114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.319132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 
00:30:20.255 [2024-07-25 12:16:57.319396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.319414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.319612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.319631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.319789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.319807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.320072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.255 [2024-07-25 12:16:57.320091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.255 qpair failed and we were unable to recover it. 00:30:20.255 [2024-07-25 12:16:57.320323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.320341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.320575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.320593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.320745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.320764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.320992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.321010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.321202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.321220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.321488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.321506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.321655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.321674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.321876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.321894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.322051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.322069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.322280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.322298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.322578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.322596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.322895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.322914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.323042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.323061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.323272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.323291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.323429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.323447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.323648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.323667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.323798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.323816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.324024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.324043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.324312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.324330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.324568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.324586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.324793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.324811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.325001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.325019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.325316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.325335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.325529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.325551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.325709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.325727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.325919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.325937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.326161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.326179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.326301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.326320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.326558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.326576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.326815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.326834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.327072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.327090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.327380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.327399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.327631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.327650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.327968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.327986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.328169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.328187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 00:30:20.256 [2024-07-25 12:16:57.328340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.328358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.256 qpair failed and we were unable to recover it. 
00:30:20.256 [2024-07-25 12:16:57.328500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.256 [2024-07-25 12:16:57.328518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.328822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.328841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.328987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.329005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.329215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.329233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.329439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.329458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 
00:30:20.257 [2024-07-25 12:16:57.329690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.329709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.329913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.329931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.330031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.330049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.330347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.330365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.330610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.330628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 
00:30:20.257 [2024-07-25 12:16:57.330824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.330842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.331031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.331049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.331339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.331357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.331565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.331583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.331809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.331828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 
00:30:20.257 [2024-07-25 12:16:57.332116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.332134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.332263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.332281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.332583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.332609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.332814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.332832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.332965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.332983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 
00:30:20.257 [2024-07-25 12:16:57.333273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.333291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.333567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.333585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.333764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.333782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.333989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.334007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 00:30:20.257 [2024-07-25 12:16:57.334212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.257 [2024-07-25 12:16:57.334231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.257 qpair failed and we were unable to recover it. 
00:30:20.257 [2024-07-25 12:16:57.334517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.257 [2024-07-25 12:16:57.334535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.257 qpair failed and we were unable to recover it.
00:30:20.257 [... the same three-line failure sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously, identical except for timestamps, from 2024-07-25 12:16:57.334801 through 12:16:57.362211 (log timestamps 00:30:20.257 to 00:30:20.260); repeated retry failures omitted here ...]
00:30:20.260 [2024-07-25 12:16:57.362465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.260 [2024-07-25 12:16:57.362495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.260 qpair failed and we were unable to recover it. 00:30:20.260 [2024-07-25 12:16:57.362748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.260 [2024-07-25 12:16:57.362782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.260 qpair failed and we were unable to recover it. 00:30:20.260 [2024-07-25 12:16:57.362913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.260 [2024-07-25 12:16:57.362932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.260 qpair failed and we were unable to recover it. 00:30:20.260 [2024-07-25 12:16:57.363134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.260 [2024-07-25 12:16:57.363153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.260 qpair failed and we were unable to recover it. 00:30:20.260 [2024-07-25 12:16:57.363349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.260 [2024-07-25 12:16:57.363367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.260 qpair failed and we were unable to recover it. 
00:30:20.260 [2024-07-25 12:16:57.363658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.363678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.363817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.363836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.364103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.364125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.364424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.364442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.364650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.364669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.364881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.364899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.365161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.365180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.365397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.365415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.365679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.365699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.365817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.365835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.366129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.366147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.366460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.366479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.366777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.366808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.367064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.367094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.367259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.367289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.367522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.367552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.367866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.367898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.368076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.368106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.368394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.368424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.368721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.368740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.369005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.369024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.369291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.369310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.369516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.369535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.369739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.369758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.369903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.369921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.370209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.370228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.370423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.370441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.370706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.370725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.370929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.370947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.371088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.371106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.371369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.371399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.371691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.371722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.371946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.371964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.372228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.372247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.372526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.372544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 
00:30:20.261 [2024-07-25 12:16:57.372838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.372857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.373066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.373084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.261 [2024-07-25 12:16:57.373217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.261 [2024-07-25 12:16:57.373236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.261 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.373384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.373402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.373610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.373629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.373778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.373797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.374003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.374033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.374208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.374243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.374530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.374561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.374787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.374807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.375093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.375111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.375322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.375340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.375537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.375555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.375827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.375859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.376113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.376143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.376431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.376462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.376705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.376724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.376965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.376983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.377203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.377221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.377486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.377505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.377766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.377785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.378083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.378101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.378396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.378414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.378732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.378764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.379079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.379110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.379327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.379357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.379676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.379708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.379976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.380006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.380244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.380262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.380471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.380489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.380693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.380712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.380937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.380968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.381273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.381303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.381541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.381572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 00:30:20.262 [2024-07-25 12:16:57.381837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.381869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.262 qpair failed and we were unable to recover it. 
00:30:20.262 [2024-07-25 12:16:57.382050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.262 [2024-07-25 12:16:57.382068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.263 qpair failed and we were unable to recover it. 00:30:20.263 [2024-07-25 12:16:57.382215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.263 [2024-07-25 12:16:57.382234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.263 qpair failed and we were unable to recover it. 00:30:20.263 [2024-07-25 12:16:57.382435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.263 [2024-07-25 12:16:57.382453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.263 qpair failed and we were unable to recover it. 00:30:20.263 [2024-07-25 12:16:57.382716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.263 [2024-07-25 12:16:57.382735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.263 qpair failed and we were unable to recover it. 00:30:20.263 [2024-07-25 12:16:57.382861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.263 [2024-07-25 12:16:57.382879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.263 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.412633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.412665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.412919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.412950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.413191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.413210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.413416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.413434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.413734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.413766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.414091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.414122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.414352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.414382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.414613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.414644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.414890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.414934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.415177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.415196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.415409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.415427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.415664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.415696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.415919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.415937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.416202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.416246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.416624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.416655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.416812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.416830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.417035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.417066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.417382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.417418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.417721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.417753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.417988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.418018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.418193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.418224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.418518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.418549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.418747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.418779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.419097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.419127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.419418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.419448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.419739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.419770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.420007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.420038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.420201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.420219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.420452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.420471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 00:30:20.266 [2024-07-25 12:16:57.420763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.266 [2024-07-25 12:16:57.420794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.266 qpair failed and we were unable to recover it. 
00:30:20.266 [2024-07-25 12:16:57.421033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.421064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.421413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.421432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.421713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.421745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.421974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.422006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.422324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.422355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.422587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.422626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.422873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.422904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.423052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.423083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.423378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.423408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.423576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.423623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.423947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.423977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.424272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.424302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.424469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.424500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.424796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.424828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.425125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.425156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.425479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.425510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.425745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.425764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.426002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.426032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.426245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.426276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.426516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.426547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.426865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.426885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.427055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.427073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.427229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.427248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.427520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.427551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.427806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.427838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.428096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.428127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.428368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.428398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.428627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.428666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.428882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.428900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.429090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.429109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.429425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.429456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.429776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.429808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.430043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.430074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.430383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.430414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.430723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.430755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 
00:30:20.267 [2024-07-25 12:16:57.431003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.431034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.267 [2024-07-25 12:16:57.431203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.267 [2024-07-25 12:16:57.431235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.267 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.431474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.431505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.431821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.431853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.432096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.432127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 
00:30:20.268 [2024-07-25 12:16:57.432412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.432442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.432692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.432723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.433064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.433099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.433327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.433356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 00:30:20.268 [2024-07-25 12:16:57.433686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.268 [2024-07-25 12:16:57.433718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.268 qpair failed and we were unable to recover it. 
00:30:20.268 [2024-07-25 12:16:57.433896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.268 [2024-07-25 12:16:57.433926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.268 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x7f7bfc000b90 (addr=10.0.0.2, port=4420) repeats on every retry from 12:16:57.433896 through 12:16:57.464702; duplicate entries elided ...]
00:30:20.271 [2024-07-25 12:16:57.464941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.464959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.465154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.465173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.465314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.465332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.465547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.465577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.465829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.465860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 
00:30:20.271 [2024-07-25 12:16:57.466138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.466168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.466460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.466491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.466783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.466815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.466969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.466999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.467271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.467290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 
00:30:20.271 [2024-07-25 12:16:57.467529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.467547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.467764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.467783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.467994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.468012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.468223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.468242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 00:30:20.271 [2024-07-25 12:16:57.468435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.271 [2024-07-25 12:16:57.468457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.271 qpair failed and we were unable to recover it. 
00:30:20.271 [2024-07-25 12:16:57.468720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.468765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.468948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.468979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.469300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.469330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.469590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.469629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.469944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.469974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.470152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.470183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.470352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.470382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.470757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.470788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.471081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.471112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.471400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.471432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.471754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.471786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.472086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.472117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.472377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.472407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.472658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.472690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.472922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.472953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.473172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.473202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.473340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.473371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.473659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.473690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.473908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.473938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.474206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.474237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.474409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.474439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.474637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.474668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.474945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.474964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.475177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.475195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.475431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.475449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.475654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.475673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.475999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.476030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.476269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.476299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.476591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.476634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.476948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.476978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.477198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.477228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.477453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.477484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.477665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.477697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.477925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.477944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.478208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.478250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 
00:30:20.272 [2024-07-25 12:16:57.478415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.478445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.272 qpair failed and we were unable to recover it. 00:30:20.272 [2024-07-25 12:16:57.478737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.272 [2024-07-25 12:16:57.478768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.479057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.479087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.479404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.479434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.479742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.479779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 
00:30:20.273 [2024-07-25 12:16:57.480094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.480113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.480323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.480342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.480644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.480676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.480905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.480935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.481193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.481223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 
00:30:20.273 [2024-07-25 12:16:57.481397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.481428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.481714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.481745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.481903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.481934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.482256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.482286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.482610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.482642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 
00:30:20.273 [2024-07-25 12:16:57.482921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.482951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.483172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.483202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.483448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.483479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.483738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.483769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.484047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.484077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 
00:30:20.273 [2024-07-25 12:16:57.484363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.484381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.484584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.484608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.484872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.484915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.485150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.485181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 00:30:20.273 [2024-07-25 12:16:57.485359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.273 [2024-07-25 12:16:57.485389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.273 qpair failed and we were unable to recover it. 
00:30:20.273 [2024-07-25 12:16:57.485708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.485743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.485974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.486005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.486374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.486443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.486697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.486735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.486991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.487023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.487208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.487238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.487405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.487444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.487771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.487803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.488124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.488154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.488321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.488351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.488579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.488617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.488951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.488981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.489206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.489224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.273 qpair failed and we were unable to recover it.
00:30:20.273 [2024-07-25 12:16:57.489531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.273 [2024-07-25 12:16:57.489549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.489728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.489747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.490010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.490028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.490240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.490270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.490500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.490530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.490694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.490725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.490951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.490991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.491116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.491147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.491448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.491466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.491668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.491688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.491916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.491934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.492142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.492161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.492367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.492386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.492589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.492614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.492878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.492896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.493123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.493141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.493417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.493447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.493684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.493717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.493936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.493955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.494178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.494208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.494556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.494587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.494888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.494919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.495146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.495164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.495376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.495394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.495591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.495615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.495847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.495866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.496059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.496077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.496375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.496405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.496588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.496627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.496792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.496823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.497090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.497120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.497440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.497470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.497761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.497793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.497986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.498016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.498257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.498288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.498460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.498490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.498720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.498752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.274 [2024-07-25 12:16:57.498928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.274 [2024-07-25 12:16:57.498959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.274 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.499254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.499284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.499506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.499536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.499854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.499897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.500128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.500146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.500282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.500300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.500516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.500546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.500809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.500841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.501069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.501099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.501384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.501420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.501662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.501694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.502009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.502039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.502198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.502229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.502490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.502520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.502811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.502843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.503085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.503104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.503241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.503259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.503472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.503491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.503647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.503666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.503823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.503841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.504062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.504080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.504235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.504253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.504458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.504476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.504710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.504729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.504934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.504964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.505260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.505291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.505488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.505519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.505836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.505868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.506120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.506138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.506347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.506366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.506570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.506588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.506827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.506858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.507134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.507164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.507399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.507417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.507552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.507583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.507776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.507807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.508104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.275 [2024-07-25 12:16:57.508136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.275 qpair failed and we were unable to recover it.
00:30:20.275 [2024-07-25 12:16:57.508434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.508464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.508625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.508657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.508808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.508838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.509100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.509130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.509361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.509380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.509574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.509592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.509857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.509876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.510084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.510102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.510300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.510330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.510505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.510535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.510789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.510821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.511111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.511141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.511460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.511496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.511744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.511776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.512012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.512042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.512303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.512334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.512648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.512680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.512917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.512948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.513249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.513285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.513550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.513582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.513880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.513911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.514145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.514164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.514378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.514397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.514589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.514613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.514844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.514863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.515155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.515174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.515318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.515337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.515549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.276 [2024-07-25 12:16:57.515567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.276 qpair failed and we were unable to recover it.
00:30:20.276 [2024-07-25 12:16:57.515871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.276 [2024-07-25 12:16:57.515890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.276 qpair failed and we were unable to recover it. 00:30:20.276 [2024-07-25 12:16:57.516009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.276 [2024-07-25 12:16:57.516028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.276 qpair failed and we were unable to recover it. 00:30:20.276 [2024-07-25 12:16:57.516263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.276 [2024-07-25 12:16:57.516281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.276 qpair failed and we were unable to recover it. 00:30:20.276 [2024-07-25 12:16:57.516544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.276 [2024-07-25 12:16:57.516562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.516775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.516795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 
00:30:20.277 [2024-07-25 12:16:57.516997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.517015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.517232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.517250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.517469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.517487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.517779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.517798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.517954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.517973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 
00:30:20.277 [2024-07-25 12:16:57.518175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.518193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.518342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.518360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.518587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.518641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.518794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.518824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.518986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.519016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 
00:30:20.277 [2024-07-25 12:16:57.519333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.519364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.277 [2024-07-25 12:16:57.519680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.277 [2024-07-25 12:16:57.519728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.277 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.519963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.519994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.520333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.520354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.520512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.520530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 
00:30:20.556 [2024-07-25 12:16:57.520737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.520756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.520969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.520988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.521197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.521227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.521458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.521488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.521713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.521749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 
00:30:20.556 [2024-07-25 12:16:57.521924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.521954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.556 qpair failed and we were unable to recover it. 00:30:20.556 [2024-07-25 12:16:57.522105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.556 [2024-07-25 12:16:57.522135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.522377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.522407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.522639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.522671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.522899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.522929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 
00:30:20.557 [2024-07-25 12:16:57.523308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.523339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.523524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.523555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.523853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.523884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.524102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.524133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.524372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.524412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 
00:30:20.557 [2024-07-25 12:16:57.524646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.524665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.524788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.524807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.525125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.525143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.525390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.525409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.525678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.525697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 
00:30:20.557 [2024-07-25 12:16:57.525845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.525864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.526028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.526048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.526195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.526214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.526427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.526445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.526669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.526688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 
00:30:20.557 [2024-07-25 12:16:57.526979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.526997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.527146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.527165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.527376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.527395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.527641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.527660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.527807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.527825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 
00:30:20.557 [2024-07-25 12:16:57.528098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.528117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.528289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.528308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.528514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.528533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.528803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.528823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.529027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.529046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 
00:30:20.557 [2024-07-25 12:16:57.529209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.529227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.529360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.529379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.529586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.529610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.529832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.529850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.557 qpair failed and we were unable to recover it. 00:30:20.557 [2024-07-25 12:16:57.530141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.557 [2024-07-25 12:16:57.530159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 
00:30:20.558 [2024-07-25 12:16:57.530352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.530371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.530521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.530539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.530750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.530769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.530877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.530895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.531103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.531125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 
00:30:20.558 [2024-07-25 12:16:57.531323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.531341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.531493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.531512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.531671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.531690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.531846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.531865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.532136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.532154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 
00:30:20.558 [2024-07-25 12:16:57.532371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.532390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.532619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.532638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.532955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.532973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.533182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.533200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.533395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.533413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 
00:30:20.558 [2024-07-25 12:16:57.533666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.533685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.533808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.533827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.533955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.533974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.534128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.534146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 00:30:20.558 [2024-07-25 12:16:57.534407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.558 [2024-07-25 12:16:57.534426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.558 qpair failed and we were unable to recover it. 
00:30:20.561 [2024-07-25 12:16:57.557019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.557050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.557346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.557417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.557695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.557734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.557909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.557941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.558101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.558131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.558273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.561 [2024-07-25 12:16:57.558294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.561 qpair failed and we were unable to recover it. 00:30:20.561 [2024-07-25 12:16:57.558542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.561 [2024-07-25 12:16:57.558560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.561 qpair failed and we were unable to recover it. 00:30:20.561 [2024-07-25 12:16:57.558773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.561 [2024-07-25 12:16:57.558793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.561 qpair failed and we were unable to recover it. 00:30:20.561 [2024-07-25 12:16:57.559059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.561 [2024-07-25 12:16:57.559077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.561 qpair failed and we were unable to recover it. 00:30:20.561 [2024-07-25 12:16:57.559270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.561 [2024-07-25 12:16:57.559301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.561 qpair failed and we were unable to recover it. 
00:30:20.561 [2024-07-25 12:16:57.559537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.559567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.559765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.559797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.560031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.560062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.560230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.560259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.560434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.560465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.560613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.560645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.560888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.560919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.561069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.561099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.561341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.561372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.561617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.561649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.561835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.561865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.562094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.562124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.562372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.562402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.562617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.562649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.562818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.562849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.563024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.563055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.563291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.561 [2024-07-25 12:16:57.563321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.561 qpair failed and we were unable to recover it.
00:30:20.561 [2024-07-25 12:16:57.563557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.563589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.563777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.563809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.564051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.564081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.564314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.564345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.564563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.564594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.564831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.564863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.565189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.565220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.565451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.565482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.565651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.565683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.565946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.565977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.566209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.566239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.566499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.566530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.566785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.566817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.566981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.567012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.567353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.567390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.567679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.567711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.567947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.567978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.568270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.568307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.568598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.568639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.568999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.569031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.569257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.569288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.569515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.569546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.569724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.569756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.569987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.570018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.570254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.570284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.570522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.570553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.570810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.570843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.571017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.571047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.571218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.571248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.571552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.571583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.571772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.571804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.572037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.572068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.572314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.572344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.572576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.572618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.572844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.572862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.573068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.573086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.573337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.573368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.562 qpair failed and we were unable to recover it.
00:30:20.562 [2024-07-25 12:16:57.573536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.562 [2024-07-25 12:16:57.573566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.573761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.573794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.574096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.574127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.574402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.574433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.574601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.574644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.574934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.574965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.575244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.575275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.575504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.575535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.575755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.575787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.576021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.576052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.576219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.576237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.576462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.576494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.576723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.576755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.577010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.577041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.577308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.577339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.577587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.577626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.577823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.577855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.578146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.578182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.578345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.578375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.578621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.578653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.578979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.579010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.579142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.579161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.579364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.579394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.579562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.579601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.579799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.579838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.580038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.580080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.580396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.580426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.580573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.580595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.580832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.580858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.581104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.581125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.581277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.581295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.581593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.581635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.581902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.581920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.582151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.582169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.582400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.582418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.582624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.582644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.582849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.582867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.563 [2024-07-25 12:16:57.583065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.563 [2024-07-25 12:16:57.583083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.563 qpair failed and we were unable to recover it.
00:30:20.564 [2024-07-25 12:16:57.583280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.564 [2024-07-25 12:16:57.583298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.564 qpair failed and we were unable to recover it.
00:30:20.564 [2024-07-25 12:16:57.583492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.564 [2024-07-25 12:16:57.583510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.564 qpair failed and we were unable to recover it.
00:30:20.564 [2024-07-25 12:16:57.583656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.564 [2024-07-25 12:16:57.583675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.564 qpair failed and we were unable to recover it.
00:30:20.564 [2024-07-25 12:16:57.583812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.583830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.584044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.584062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.584288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.584306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.584511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.584530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.584752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.584772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 
00:30:20.564 [2024-07-25 12:16:57.584971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.584989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.585279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.585298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.585511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.585529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.585687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.585706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.585972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.585995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 
00:30:20.564 [2024-07-25 12:16:57.586205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.586224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.586416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.586434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.586700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.586719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.587010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.587028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.587173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.587191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 
00:30:20.564 [2024-07-25 12:16:57.587322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.587341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.587531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.587553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.587751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.587770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.587978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.587996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.588306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.588324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 
00:30:20.564 [2024-07-25 12:16:57.588530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.588548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.588683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.588703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.588946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.588964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.589157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.589175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.589370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.589389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 
00:30:20.564 [2024-07-25 12:16:57.589559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.589577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.589776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.589796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.589943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.589961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.590188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.590206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.590468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.590487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 
00:30:20.564 [2024-07-25 12:16:57.590715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.590735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.590965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.590983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.591175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.591194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.564 [2024-07-25 12:16:57.591332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.564 [2024-07-25 12:16:57.591351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.564 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.591489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.591508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.591633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.591653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.591878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.591896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.592044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.592062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.592193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.592211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.592421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.592440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.592580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.592599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.592821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.592840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.593103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.593122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.593339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.593358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.593503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.593521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.593735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.593754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.593964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.593982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.594185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.594204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.594485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.594509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.594649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.594670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.594817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.594836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.595113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.595131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.595402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.595421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.595641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.595661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.595860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.595878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.596089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.596107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.596366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.596388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.596654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.596674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.596817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.596836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.597130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.597149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.597285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.597304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.597565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.597584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.597776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bfe80 is same with the state(5) to be set 00:30:20.565 [2024-07-25 12:16:57.598168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.598237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.598524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.598562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.598862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.598895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.599128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.599159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.599390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.599420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.599648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.599679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.599894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.599915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.565 [2024-07-25 12:16:57.600180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.600202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 
00:30:20.565 [2024-07-25 12:16:57.600395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.565 [2024-07-25 12:16:57.600413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.565 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.600615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.600635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.600831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.600850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.601005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.601024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.601309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.601327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 
00:30:20.566 [2024-07-25 12:16:57.601520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.601539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.601675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.601694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.601895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.601913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.602052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.602070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.602289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.602307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 
00:30:20.566 [2024-07-25 12:16:57.602502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.602521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.602749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.602768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.602978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.602996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.603140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.603159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.603448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.603466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 
00:30:20.566 [2024-07-25 12:16:57.603731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.603750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.603893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.603912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.604115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.604134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.604340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.604359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 00:30:20.566 [2024-07-25 12:16:57.604479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.566 [2024-07-25 12:16:57.604497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.566 qpair failed and we were unable to recover it. 
00:30:20.569 [2024-07-25 12:16:57.636090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.569 [2024-07-25 12:16:57.636123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.569 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.636349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.636380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.636546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.636577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.636813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.636845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.637085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.637116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.637427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.637457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.637747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.637766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.637997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.638028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.638178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.638209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.638438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.638469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.638719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.638739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.638908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.638938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.639228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.639259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.639526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.639557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.639805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.639838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.640105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.640136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.640379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.640398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.640591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.640617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.640774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.640804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.640977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.641008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.641166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.641196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.641355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.641385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.641682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.641714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.642034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.642065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.642310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.642341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.642583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.642625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.642864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.642895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.643070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.643101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.643346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.643378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.643668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.643705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.644024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.644055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.644210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.644242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.644475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.644519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.644657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.644677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.644841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.644871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 
00:30:20.570 [2024-07-25 12:16:57.645135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.645165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.645332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.570 [2024-07-25 12:16:57.645375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.570 qpair failed and we were unable to recover it. 00:30:20.570 [2024-07-25 12:16:57.645642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.645684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.645909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.645940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.646174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.646205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 
00:30:20.571 [2024-07-25 12:16:57.646364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.646395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.646622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.646654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.646884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.646915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.647174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.647205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.647371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.647401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 
00:30:20.571 [2024-07-25 12:16:57.647637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.647657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.647858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.647876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.648072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.648090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.648229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.648247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.648390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.648409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 
00:30:20.571 [2024-07-25 12:16:57.648537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.648555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.648779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.648800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.649049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.649081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.649318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.649348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.649668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.649699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 
00:30:20.571 [2024-07-25 12:16:57.649962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.649993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.650232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.650263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.650481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.650500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.650714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.650733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.650925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.650943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 
00:30:20.571 [2024-07-25 12:16:57.651092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.651110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.651370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.651401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.651636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.651667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.571 [2024-07-25 12:16:57.651903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.571 [2024-07-25 12:16:57.651934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.571 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.652093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.652124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 
00:30:20.572 [2024-07-25 12:16:57.652389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.652420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.652725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.652762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.652939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.652970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.653211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.653242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.653533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.653569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 
00:30:20.572 [2024-07-25 12:16:57.653755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.653788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.654069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.654100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.654265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.654284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.654415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.654434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 00:30:20.572 [2024-07-25 12:16:57.654564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.572 [2024-07-25 12:16:57.654583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.572 qpair failed and we were unable to recover it. 
00:30:20.572 [2024-07-25 12:16:57.654790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.654809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.654968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.654986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.655174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.655204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.655366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.655397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.655565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.655596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.655829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.655847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.656059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.656077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.656287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.656306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.656439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.656458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.656596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.656622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.656919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.656950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.657145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.657175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.657400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.657431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.657618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.657637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.657798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.657829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.658053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.658084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.658238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.658257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.658396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.658415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.658683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.658715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.658953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.658983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.659212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.659243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.659407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.659438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.659671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.659703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.659872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.659903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.660056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.660086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.572 [2024-07-25 12:16:57.660247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.572 [2024-07-25 12:16:57.660278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.572 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.660509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.660539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.660736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.660767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.660942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.660974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.661131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.661161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.661405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.661436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.661664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.661684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.661821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.661839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.661978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.662008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.662169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.662205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.662378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.662408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.662663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.662695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.662917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.662948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.663243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.663273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.663513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.663544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.663721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.663754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.663989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.664019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.664251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.664282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.664569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.664588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.664728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.664747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.664953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.664972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.665127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.665157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.665408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.665439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.665727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.665747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.665945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.665977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.666149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.666180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.666426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.666456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.666683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.666702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.666905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.666924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.667129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.667148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.667311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.667330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.667525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.667543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.667737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.667757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.667957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.667975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.668108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.573 [2024-07-25 12:16:57.668126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.573 qpair failed and we were unable to recover it.
00:30:20.573 [2024-07-25 12:16:57.668295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.668314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.668461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.668492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.668718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.668749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.669038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.669069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.669357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.669388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.669612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.669632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.669923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.669941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.670085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.670104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.670245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.670264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.670527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.670557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.670805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.670836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.671013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.671043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.671265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.671296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.671533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.671551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.671679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.671703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.671925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.671943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.672074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.672092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.672410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.672441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.672616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.672647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.672947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.672978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.673199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.673230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.673479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.673510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.673755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.673786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.674011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.674041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.674262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.674293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.674473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.674503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.674673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.674693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.674959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.674977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.675195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.675214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.675340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.675358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.675587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.675629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.675853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.675884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.676065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.676095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.676271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.676289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.574 [2024-07-25 12:16:57.676413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-07-25 12:16:57.676432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.574 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.676750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.676781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.677059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.677090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.677367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.677398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.677595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.677634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.677817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.677836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.678070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.678100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.678286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.678317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.678538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.678557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.678700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.678720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.678929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.678960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.679250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.679281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.679442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.679472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.679639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.679671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.679931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.679962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.680215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.680245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.680478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.680496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.680682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.680701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.681053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.681084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.681324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.681355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.681526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-07-25 12:16:57.681562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.575 qpair failed and we were unable to recover it.
00:30:20.575 [2024-07-25 12:16:57.681744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.681764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.681974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.682003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.682222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.682253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.682545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.682576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.682894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.682925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 
00:30:20.575 [2024-07-25 12:16:57.683102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.683132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.683365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.683395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.683576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.683616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.683765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.683795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.684029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.684059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 
00:30:20.575 [2024-07-25 12:16:57.684387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.684417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.684595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.684637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.684901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.684919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.685191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.685235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.685413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.685444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 
00:30:20.575 [2024-07-25 12:16:57.685685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.575 [2024-07-25 12:16:57.685717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.575 qpair failed and we were unable to recover it. 00:30:20.575 [2024-07-25 12:16:57.686007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.686052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.686227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.686257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.686501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.686532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.686699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.686719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 
00:30:20.576 [2024-07-25 12:16:57.686861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.686894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.687058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.687089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.687332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.687363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.687516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.687546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.687738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.687771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 
00:30:20.576 [2024-07-25 12:16:57.688003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.688034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.688214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.688246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.688502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.688533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.688766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.688785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.688994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.689024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 
00:30:20.576 [2024-07-25 12:16:57.689247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.689278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.689428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.689459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.689629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.689661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.689881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.689912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.690154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.690185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 
00:30:20.576 [2024-07-25 12:16:57.690417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.690436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.690642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.690661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.690794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.690812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.691011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.691041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.691332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.691369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 
00:30:20.576 [2024-07-25 12:16:57.691521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.691552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.691797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.691829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.692003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.692033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.692190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.692221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.576 [2024-07-25 12:16:57.692385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.692416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 
00:30:20.576 [2024-07-25 12:16:57.692657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.576 [2024-07-25 12:16:57.692689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.576 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.692982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.693012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.693192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.693222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.693455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.693485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.693648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.693679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.693857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.693875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.694190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.694221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.694375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.694406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.694588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.694628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.694863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.694893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.695042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.695073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.695247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.695277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.695570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.695600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.695801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.695833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.696029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.696048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.696247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.696278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.696433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.696464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.696808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.696839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.697085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.697115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.697347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.697378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.697614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.697646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.697831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.697863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.698021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.698040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.698364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.698382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.698512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.698531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.698783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.698818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.698992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.699023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.699182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.699212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.699430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.699462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.699630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.699661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.699903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.699934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.700102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.700121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.700402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.700432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.700590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.700630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.700854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.700889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 
00:30:20.577 [2024-07-25 12:16:57.701074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.701106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.701278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.577 [2024-07-25 12:16:57.701309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.577 qpair failed and we were unable to recover it. 00:30:20.577 [2024-07-25 12:16:57.701535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.701554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.701773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.701792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.702055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.702095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.702267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.702297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.702475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.702506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.702758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.702790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.703019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.703050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.703200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.703230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.703473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.703504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.703731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.703763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.703927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.703946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.704180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.704211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.704451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.704481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.704647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.704679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.704849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.704879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.705074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.705093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.705310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.705341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.705522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.705552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.705868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.705900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.706057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.706075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.706257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.706288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.706437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.706468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.706730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.706761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.707068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.707099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.707342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.707373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.707537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.707556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.707709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.707728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.707995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.708026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.708187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.708218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.708394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.708412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.708616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.708635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.708794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.708836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.709144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.709176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 
00:30:20.578 [2024-07-25 12:16:57.709424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.709455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.709681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.709712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.709995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.578 [2024-07-25 12:16:57.710026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.578 qpair failed and we were unable to recover it. 00:30:20.578 [2024-07-25 12:16:57.710202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.710232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.710481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.710517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.710772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.710805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.710995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.711025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.711267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.711298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.711472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.711502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.711690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.711709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.711851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.711869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.712078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.712096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.712302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.712320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.712584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.712635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.712902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.712933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.713194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.713225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.713585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.713625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.713806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.713837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.714019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.714050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.714213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.714244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.714465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.714496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.714812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.714844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.715162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.715194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.715427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.715457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.715698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.715730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.715954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.715984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.716234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.716264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.716407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.716426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.716690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.716710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.716995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.717014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.717165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.717184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.717399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.717419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.717562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.717580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.717775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.717807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.718043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.718073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 
00:30:20.579 [2024-07-25 12:16:57.718367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.718408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.718545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.718564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.718891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.718911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.579 [2024-07-25 12:16:57.719110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.579 [2024-07-25 12:16:57.719141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.579 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.719381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.719412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 
00:30:20.580 [2024-07-25 12:16:57.719590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.719629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.719803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.719821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.720101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.720119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.720329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.720347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.720492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.720514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 
00:30:20.580 [2024-07-25 12:16:57.720651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.720669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.720892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.720923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.721092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.721123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.721395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.721426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.721649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.721669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 
00:30:20.580 [2024-07-25 12:16:57.721887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.721917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.722148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.722178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.722412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.722443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.722703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.722722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.723024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.723054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 
00:30:20.580 [2024-07-25 12:16:57.723222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.723252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.723542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.723573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.723890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.723921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.724111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.724142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.724370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.724400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 
00:30:20.580 [2024-07-25 12:16:57.724575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.724594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.724817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.724835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.724972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.725002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.725262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.725293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 00:30:20.580 [2024-07-25 12:16:57.725624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.580 [2024-07-25 12:16:57.725655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.580 qpair failed and we were unable to recover it. 
00:30:20.580 [2024-07-25 12:16:57.725817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.725847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.726065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.726096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.726270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.726300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.726465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.726496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.726801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.726833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.727058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.727088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.727327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.727359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.727676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.727708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.728002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.728032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.728197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-07-25 12:16:57.728228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.580 qpair failed and we were unable to recover it.
00:30:20.580 [2024-07-25 12:16:57.728409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.728439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.728589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.728646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.728846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.728877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.729165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.729196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.729510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.729540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.729800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.729831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.729991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.730021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.730195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.730213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.730426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.730457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.730640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.730671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.730915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.730946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.731107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.731138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.731306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.731336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.731645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.731676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.731968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.731999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.732244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.732274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.732452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.732482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.732642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.732662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.732805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.732823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.733025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.733043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.733239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.733259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.733472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.733490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.733706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.733725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.733937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.733957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.734103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.734122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.734320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.734338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.734535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.734554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.734767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.734799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.735022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.735053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.735224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.735254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.735419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.735450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.581 qpair failed and we were unable to recover it.
00:30:20.581 [2024-07-25 12:16:57.735718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.581 [2024-07-25 12:16:57.735750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.735987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.736020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.736305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.736349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.736555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.736574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.736742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.736761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.736932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.736954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.737191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.737221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.737419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.737451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.737683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.737702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.737826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.737844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.738092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.738123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.738408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.738438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.738676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.738707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.738939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.738969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.739287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.739318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.739562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.739593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.739780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.739811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.739986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.740017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.740253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.740284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.740515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.740546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.740858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.740889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.741146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.741176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.741514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.741546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.741780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.741812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.741989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.742020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.742263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.742294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.742443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.742462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.742620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.742639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.742848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.742879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.743039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.743069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.743236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.743268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.743499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.743530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.743758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.743791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.744069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.744100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.744355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.744386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.744561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.744591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.744780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.744800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.582 qpair failed and we were unable to recover it.
00:30:20.582 [2024-07-25 12:16:57.745030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.582 [2024-07-25 12:16:57.745060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.745320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.745351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.745649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.745669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.745879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.745897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.746102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.746121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.746312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.746331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.746527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.746546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.746749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.746768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.746987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.747022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.747345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.747376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.747619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.747650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.747937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.747956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.748097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.748115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.748248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.748267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.748433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.748451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.748648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.748680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.748910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.583 [2024-07-25 12:16:57.748942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.583 qpair failed and we were unable to recover it.
00:30:20.583 [2024-07-25 12:16:57.749185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.749216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.749423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.749453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.749670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.749702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.749942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.749972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.750133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.750164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 
00:30:20.583 [2024-07-25 12:16:57.750394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.750425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.750578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.750597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.750828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.750861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.751141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.751171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.751412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.751443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 
00:30:20.583 [2024-07-25 12:16:57.751662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.751694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.751859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.751877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.752024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.752042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.752311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.752342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.752508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.752538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 
00:30:20.583 [2024-07-25 12:16:57.752774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.752805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.753149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.753179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.753410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.753441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.753656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.753725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 00:30:20.583 [2024-07-25 12:16:57.753919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.583 [2024-07-25 12:16:57.753953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.583 qpair failed and we were unable to recover it. 
00:30:20.583 [2024-07-25 12:16:57.754110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.754141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.754437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.754467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.754884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.754916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.755098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.755127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.755379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.755410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.755589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.755627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.755884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.755914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.756134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.756164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.756337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.756367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.756615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.756647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.756801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.756831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.756982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.757026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.757283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.757313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.757535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.757565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.757813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.757844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.758107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.758138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.758352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.758373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.758598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.758623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.758839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.758857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.759073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.759092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.759349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.759379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.759614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.759646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.759822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.759855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.760150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.760181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.760340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.760370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.760665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.760696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.760938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.760968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.761225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.761255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bec000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.761517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.761537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.761735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.761754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.761966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.761996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.762226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.762256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.762544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.762575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.762784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.762815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.763051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.763082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 
00:30:20.584 [2024-07-25 12:16:57.763314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.763345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.584 [2024-07-25 12:16:57.763645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.584 [2024-07-25 12:16:57.763663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.584 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.763957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.763988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.764250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.764281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.764505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.764524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 
00:30:20.585 [2024-07-25 12:16:57.765934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.765966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.766213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.766232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.766428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.766446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.766664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.766696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.766985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.767016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 
00:30:20.585 [2024-07-25 12:16:57.767256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.767287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.767578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.767617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.767896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.767927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.768109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.768140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.768398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.768429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 
00:30:20.585 [2024-07-25 12:16:57.768657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.768690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.768867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.768890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.769052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.769082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.769375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.769405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.769635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.769667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 
00:30:20.585 [2024-07-25 12:16:57.769903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.769921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.770152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.770171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.770377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.770396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.770667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.770686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.770897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.770916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 
00:30:20.585 [2024-07-25 12:16:57.771160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.771179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.771325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.771344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.771492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.771510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.771621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.771641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 00:30:20.585 [2024-07-25 12:16:57.771903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.771922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it. 
00:30:20.585 [2024-07-25 12:16:57.772080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.585 [2024-07-25 12:16:57.772100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.585 qpair failed and we were unable to recover it.
[The identical connect() failure (errno = 111, ECONNREFUSED) and unrecoverable-qpair message for tqpair=0x7f7bfc000b90 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt between 12:16:57.772 and 12:16:57.799; the duplicate log entries are elided here.]
00:30:20.588 [2024-07-25 12:16:57.799528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.588 [2024-07-25 12:16:57.799558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.799722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.799764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.799915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.799933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.800126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.800144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.800378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.800408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.800670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.800703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.800948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.800978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.801145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.801175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.801500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.801531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.801700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.801731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.801966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.801996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.802166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.802184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.802411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.802430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.802667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.802687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.802885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.802914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.803088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.803119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.803356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.803386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.803552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.803583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.803824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.803856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.804023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.804041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.804327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.804358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.804589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.804629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.804919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.804950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.805148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.805179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.805333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.805364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.805591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.805630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.805949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.805979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.806146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.806191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.806319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.806337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.806481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.806525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.806769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.806802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.807040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.807076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.807231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.807261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.807425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.807456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.807686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.807705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 
00:30:20.589 [2024-07-25 12:16:57.807910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.807941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.808104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.808134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.589 [2024-07-25 12:16:57.808297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.589 [2024-07-25 12:16:57.808328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.589 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.808546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.808577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.808774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.808793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.809059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.809090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.809332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.809363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.809525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.809555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.809723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.809742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.809878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.809896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.810097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.810116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.810241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.810259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.810411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.810430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.810625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.810644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.810907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.810926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.811052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.811071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.811207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.811225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.811374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.811393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.811531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.811550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.811693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.811712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.811931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.811950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.812146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.812165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.812300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.812319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.812450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.812469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.812667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.812686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.812949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.812990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.813282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.813313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.813546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.813576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.813737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.813768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.813991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.814009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.814209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.814228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.814436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.814454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.814617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.814636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.814787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.814805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.590 [2024-07-25 12:16:57.815027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.815057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 
00:30:20.590 [2024-07-25 12:16:57.815283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.590 [2024-07-25 12:16:57.815314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.590 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.815483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.815519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.815762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.815781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.816052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.816082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.816322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.816352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 
00:30:20.591 [2024-07-25 12:16:57.816600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.816665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.816937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.816968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.817138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.817158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.817277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.817295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.817486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.817530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 
00:30:20.591 [2024-07-25 12:16:57.817692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.817724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.817884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.817915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.818089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.818120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.818412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.818444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 00:30:20.591 [2024-07-25 12:16:57.818709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.591 [2024-07-25 12:16:57.818741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.591 qpair failed and we were unable to recover it. 
00:30:20.591 [2024-07-25 12:16:57.818967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.818997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.819204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.819235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.819463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.819493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.819752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.819771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.819998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.820016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.820221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.820239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.820449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.820467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.820579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.820597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.820899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.820918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.821079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.821097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.821231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.821249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.821401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.821419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.821553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.821583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.822092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.822161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.822355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.822392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.822591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.822635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.822889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.822927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.823169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.823199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.823433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.823463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.823651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.823683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.823845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.823875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.824081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.824112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.824275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.591 [2024-07-25 12:16:57.824305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.591 qpair failed and we were unable to recover it.
00:30:20.591 [2024-07-25 12:16:57.824460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.824490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.824713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.824745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.824906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.824936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.825239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.825270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.825510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.825541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.825686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.825708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.825940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.825959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.826175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.826193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.826340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.826371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.826597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.826638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.826879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.826911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.827095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.827126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.827420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.827451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.827673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.827705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.827944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.827975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.828244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.828274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.828423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.828453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.828718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.828750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.828998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.829039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.829252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.829270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.829442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.829461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.829616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.829647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.829953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.829985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.830278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.830309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.830623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.830656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.830858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.830889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.831114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.831144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.831356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.831387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.831703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.831750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.831930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.831961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.832109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.832132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.832426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.832457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.832686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.832718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.832884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.832902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.833039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.833058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.833332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.833362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.592 [2024-07-25 12:16:57.833596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.592 [2024-07-25 12:16:57.833656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.592 qpair failed and we were unable to recover it.
00:30:20.593 [2024-07-25 12:16:57.833826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.593 [2024-07-25 12:16:57.833845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.593 qpair failed and we were unable to recover it.
00:30:20.593 [2024-07-25 12:16:57.834045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.593 [2024-07-25 12:16:57.834064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.593 qpair failed and we were unable to recover it.
00:30:20.593 [2024-07-25 12:16:57.834268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.593 [2024-07-25 12:16:57.834286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.593 qpair failed and we were unable to recover it.
00:30:20.593 [2024-07-25 12:16:57.835615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.593 [2024-07-25 12:16:57.835649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.593 qpair failed and we were unable to recover it.
00:30:20.593 [2024-07-25 12:16:57.835871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.593 [2024-07-25 12:16:57.835890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.593 qpair failed and we were unable to recover it.
00:30:20.593 [2024-07-25 12:16:57.836065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.593 [2024-07-25 12:16:57.836083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.593 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.836241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.836263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.836488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.836520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.836697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.836716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.836857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.836876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.837141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.837159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.837363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.837381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.837489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.837508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.837802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.837833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.839496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.839531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.839841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.839862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.840157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.840188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.840407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.840438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.840686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.840718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.840998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.841017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.880 [2024-07-25 12:16:57.841227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.880 [2024-07-25 12:16:57.841246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.880 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.841457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.841476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.841812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.841844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.842083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.842113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.842377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.842408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.842675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.842707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.842917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.842947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.843190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.843208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.843423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.843442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.843703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.843722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.843948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.843967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.844185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.844215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.844466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.844497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.844765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.844802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.845039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.881 [2024-07-25 12:16:57.845069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.881 qpair failed and we were unable to recover it.
00:30:20.881 [2024-07-25 12:16:57.845244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.845275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.845599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.845655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.845859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.845890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.846136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.846167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.846456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.846475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 
00:30:20.881 [2024-07-25 12:16:57.846785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.846817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.847087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.847118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.847359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.847390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.847556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.847587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.847886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.847917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 
00:30:20.881 [2024-07-25 12:16:57.848158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.848176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.848390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.848409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.848622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.848642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.848958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.848989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.849279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.849310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 
00:30:20.881 [2024-07-25 12:16:57.849500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.849532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.849709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.849740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.850054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.850073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.850300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.850319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.850441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.850460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 
00:30:20.881 [2024-07-25 12:16:57.850754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.850775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.851008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.851027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.881 [2024-07-25 12:16:57.851245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.881 [2024-07-25 12:16:57.851263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.881 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.851501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.851520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.851817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.851849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.852149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.852218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.852462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.852495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.852722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.852755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.853027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.853057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.853314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.853344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.853582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.853625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.853791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.853813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.854047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.854078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.854305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.854335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.854562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.854593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.854825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.854856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.855128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.855159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.855494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.855524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.855755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.855787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.855974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.856004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.856227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.856246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.856448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.856466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.856669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.856689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.856880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.856899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.857047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.857065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.857221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.857252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.857478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.857510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.857795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.857814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.857964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.857994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.858263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.858293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.858562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.858593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.858894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.858926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.859163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.859194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.859472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.859503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.859801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.859833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.860163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.860195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.860492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.860523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.860701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.860735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.860981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.861000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 00:30:20.882 [2024-07-25 12:16:57.861215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.882 [2024-07-25 12:16:57.861234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.882 qpair failed and we were unable to recover it. 
00:30:20.882 [2024-07-25 12:16:57.861462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.861493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.861722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.861753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.861992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.862023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.862253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.862284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.862458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.862489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 
00:30:20.883 [2024-07-25 12:16:57.862790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.862828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.863073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.863104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.863343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.863375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.863620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.863652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.863893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.863924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 
00:30:20.883 [2024-07-25 12:16:57.864086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.864105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.864244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.864275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.864504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.864534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.864678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.864710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.864931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.864962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 
00:30:20.883 [2024-07-25 12:16:57.865272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.865304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.865523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.865553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.865856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.865889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.866125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.866156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.866312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.866331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 
00:30:20.883 [2024-07-25 12:16:57.866470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.866488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.866686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.866705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.866913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.866932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.867065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.867083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.867288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.867319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 
00:30:20.883 [2024-07-25 12:16:57.867550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.867580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.867746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.867765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.867960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.867991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.868223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.868253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 00:30:20.883 [2024-07-25 12:16:57.868473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.883 [2024-07-25 12:16:57.868504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.883 qpair failed and we were unable to recover it. 
00:30:20.886 [2024-07-25 12:16:57.896154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.896184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.896351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.896381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.896623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.896656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.896881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.896912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.897207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.897251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.897472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.897491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.897704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.897723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.897863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.897882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.898098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.898128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.898296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.898327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.898486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.898517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.898700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.898732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.898904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.898935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.899162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.899193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.899370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.899401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.899644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.899663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.899796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.899815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.900108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.900127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.900351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.900381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.900570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.900600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.900771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.900803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.901027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.901058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.901296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.901327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.901503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.901534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.901827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.901860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.902155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.902199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.902408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.902426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.902565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.902584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.902809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.902828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.902975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.903005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.903176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.903207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.903373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.903403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.903688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.903720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.903948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.903967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.904181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.904199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 
00:30:20.887 [2024-07-25 12:16:57.904346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.904364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.904581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.904621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.887 [2024-07-25 12:16:57.904869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.887 [2024-07-25 12:16:57.904900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.887 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.905059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.905090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.905320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.905351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.905685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.905717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.905957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.905987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.906207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.906238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.906477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.906508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.906727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.906759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.906937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.906967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.907253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.907284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.907453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.907471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.907611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.907630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.907870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.907900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.908127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.908145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.908285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.908303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.908579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.908598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.908758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.908776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.909068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.909087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.909222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.909240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.909451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.909469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.909613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.909632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.909842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.909871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.910095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.910125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.910296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.910327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.910663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.910695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.910945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.910976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.911203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.911234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.911525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.911555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.911908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.911945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.912111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.912141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.912418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.912449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.912689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.912720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.912896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.912915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.913140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.913170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.913335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.913366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.913658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.913689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.913930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.913960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.888 [2024-07-25 12:16:57.914226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.914257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 
00:30:20.888 [2024-07-25 12:16:57.914385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.888 [2024-07-25 12:16:57.914415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.888 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.914593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.914634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.914867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.914897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.915156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.915174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.915374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.915393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 
00:30:20.889 [2024-07-25 12:16:57.915612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.915632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.915838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.915856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.916045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.916075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.916364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.916394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.916567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.916598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 
00:30:20.889 [2024-07-25 12:16:57.916791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.916823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.917054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.917084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.917371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.917411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.917564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.917595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.917774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.917805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 
00:30:20.889 [2024-07-25 12:16:57.917999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.918030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.918266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.918297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.918572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.918616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.918869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.918900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.919122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.919151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 
00:30:20.889 [2024-07-25 12:16:57.919386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.919405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.919573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.919591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.919814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.919833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.920042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.920060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.920347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.920365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 
00:30:20.889 [2024-07-25 12:16:57.920499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.920517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.920654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.920673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.920863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.920903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.921073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.921103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.889 [2024-07-25 12:16:57.921228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.921259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 
00:30:20.889 [2024-07-25 12:16:57.921478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.889 [2024-07-25 12:16:57.921519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.889 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.921747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.921779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.922095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.922126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.922289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.922320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.922488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.922519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 
00:30:20.890 [2024-07-25 12:16:57.922807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.922838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.923002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.923033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.923284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.923314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.923533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.923551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.923695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.923714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 
00:30:20.890 [2024-07-25 12:16:57.923855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.923874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.924199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.924229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.924410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.924441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.924785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.924817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.924996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.925027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 
00:30:20.890 [2024-07-25 12:16:57.925246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.925277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.925477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.925495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.925655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.925686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.925940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.925971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.926285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.926304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 
00:30:20.890 [2024-07-25 12:16:57.926506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.926525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.926805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.926824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.926971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.926988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.927146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.927176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.927361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.927392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 
00:30:20.890 [2024-07-25 12:16:57.927562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.927593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.927774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.927805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.927990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.928009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.928134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.928153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.928363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.928382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 
00:30:20.890 [2024-07-25 12:16:57.928575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.928593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.890 qpair failed and we were unable to recover it. 00:30:20.890 [2024-07-25 12:16:57.928850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.890 [2024-07-25 12:16:57.928881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.929117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.929147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.929310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.929340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.929518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.929548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.929724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.929755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.930043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.930074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.930371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.930402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.930559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.930589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.930771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.930801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.930955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.930992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.931149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.931179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.931331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.931362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.931534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.931564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.931883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.931914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.932074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.932105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.932267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.932297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.932514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.932544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.932727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.932759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.932984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.933014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.933264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.933294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.933517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.933549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.933725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.933758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.933986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.934017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.934201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.934220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.934359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.934377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.934646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.934665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.934858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.934876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.935007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.935025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.935243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.935274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.935527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.935558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.935861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.935892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.936078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.936109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.936267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.936299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.936476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.936509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 
00:30:20.891 [2024-07-25 12:16:57.936670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.936702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.937015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.937046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.937203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.937234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.891 qpair failed and we were unable to recover it. 00:30:20.891 [2024-07-25 12:16:57.937547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.891 [2024-07-25 12:16:57.937578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.937740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.937770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.938004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.938022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.938183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.938213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.938433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.938463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.938734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.938766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.938991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.939021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.939263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.939294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.939463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.939493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.939659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.939691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.939860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.939891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.940138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.940169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.940321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.940356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.940498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.940529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.940815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.940847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.941017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.941047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.941277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.941296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.941508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.941526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.941658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.941677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.941809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.941827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.942142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.942172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.942405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.942436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.942618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.942650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.942951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.942982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.943129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.943160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.943370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.943402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.943712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.943744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.943965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.943996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.944163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.944182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.944391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.944409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.944616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.944634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.944858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.944877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.945086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.945117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.945349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.945379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.945526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.945556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.945749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.945768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.945983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.946014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 
00:30:20.892 [2024-07-25 12:16:57.946256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.892 [2024-07-25 12:16:57.946286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.892 qpair failed and we were unable to recover it. 00:30:20.892 [2024-07-25 12:16:57.946453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.946483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.946724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.946744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.946893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.946911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.947053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.947072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.947282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.947313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.947486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.947517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.947749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.947781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.948069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.948099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.948333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.948364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.948594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.948634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.948830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.948860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.949089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.949119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.949278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.949308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.949483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.949513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.949676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.949712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.949881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.949912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.950075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.950105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.950335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.950366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.950655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.950688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.950863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.950893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.951070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.951089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.951232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.951251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.951389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.951407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.951618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.951638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.951830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.951849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.952042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.952060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.952375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.952394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.952658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.952677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.952842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.952861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.952982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.953017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.953180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.953211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.953490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.953520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.953743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.953773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.953934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.953965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-25 12:16:57.954123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.954154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.954320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.954350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.954531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.954562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-25 12:16:57.954741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.893 [2024-07-25 12:16:57.954772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-25 12:16:57.955001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.894 [2024-07-25 12:16:57.955031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.894 qpair failed and we were unable to recover it. 
00:30:20.894 [2024-07-25 12:16:57.955251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.894 [2024-07-25 12:16:57.955281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.894 qpair failed and we were unable to recover it.
00:30:20.894 [2024-07-25 12:16:57.955544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.894 [2024-07-25 12:16:57.955574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.894 qpair failed and we were unable to recover it.
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Read completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 Write completed with error (sct=0, sc=8)
00:30:20.894 starting I/O failed
00:30:20.894 [2024-07-25 12:16:57.956216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:20.894 [2024-07-25 12:16:57.956521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.894 [2024-07-25 12:16:57.956562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:20.894 qpair failed and we were unable to recover it.
00:30:20.894 [2024-07-25 12:16:57.956812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.894 [2024-07-25 12:16:57.956846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-25 12:16:57.957075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.894 [2024-07-25 12:16:57.957097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-25 12:16:57.957337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.894 [2024-07-25 12:16:57.957355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-25 12:16:57.957552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.894 [2024-07-25 12:16:57.957570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-25 12:16:57.957725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.894 [2024-07-25 12:16:57.957745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.894 qpair failed and we were unable to recover it. 
00:30:20.894 [2024-07-25 12:16:57.957900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.894 [2024-07-25 12:16:57.957930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.894 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error records for tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 repeat from 12:16:57.958090 through 12:16:57.980399; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:20.897 [2024-07-25 12:16:57.980690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.897 [2024-07-25 12:16:57.980759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:20.897 qpair failed and we were unable to recover it.
[... identical records for tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420 repeat from 12:16:57.981014 through 12:16:57.983421 ...]
00:30:20.897 [2024-07-25 12:16:57.983666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.897 [2024-07-25 12:16:57.983687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.897 qpair failed and we were unable to recover it.
00:30:20.897 [2024-07-25 12:16:57.983826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.983844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.984053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.984084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.984254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.984284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.984513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.984555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.984773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.984791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 
00:30:20.897 [2024-07-25 12:16:57.984936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.984955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.985103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.985133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.985385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.985415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.897 [2024-07-25 12:16:57.985594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.897 [2024-07-25 12:16:57.985636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.897 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.985856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.985886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.986109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.986139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.986293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.986311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.986469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.986490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.986632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.986652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.986782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.986801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.987847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.987881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.988027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.988046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.988179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.988198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.988488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.988519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.988763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.988796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.989020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.989051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.989341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.989371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.989613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.989644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.989903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.989934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.990089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.990107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.990262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.990293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.990531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.990562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.990923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.990955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.991109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.991139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.991325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.991356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.991507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.991526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.991660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.991679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.991825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.991847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.991984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.992002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.992218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.992237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.992373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.992413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.992662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.992694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.992937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.992968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.993193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.993223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.993454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.993472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.993682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.993714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.994043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.994073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.994301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.994320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.994527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.994557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.994740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.994771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 
00:30:20.898 [2024-07-25 12:16:57.994952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.994982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.898 [2024-07-25 12:16:57.995219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.898 [2024-07-25 12:16:57.995250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.898 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.995495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.995527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.995786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.995818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.996084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.996115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 
00:30:20.899 [2024-07-25 12:16:57.996375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.996406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.996567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.996598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.996902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.996933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.997110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.997140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.997418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.997449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 
00:30:20.899 [2024-07-25 12:16:57.997696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.997716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.997944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.997963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.998207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.998225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.998370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.998389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.998618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.998650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 
00:30:20.899 [2024-07-25 12:16:57.998815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.998845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.999016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.999047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.999432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.999464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.999708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.999740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:57.999962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:57.999992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 
00:30:20.899 [2024-07-25 12:16:58.000231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.000262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.000415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.000433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.000646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.000678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.000914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.000945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.001098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.001129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 
00:30:20.899 [2024-07-25 12:16:58.001351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.001382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.001619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.001651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.001884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.001921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.002225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.002255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.002509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.002541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 
00:30:20.899 [2024-07-25 12:16:58.002702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.002733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.002908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.002938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.899 qpair failed and we were unable to recover it. 00:30:20.899 [2024-07-25 12:16:58.003226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.899 [2024-07-25 12:16:58.003257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 00:30:20.900 [2024-07-25 12:16:58.003428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.003458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 00:30:20.900 [2024-07-25 12:16:58.003728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.003748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 
00:30:20.900 [2024-07-25 12:16:58.003888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.003906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 00:30:20.900 [2024-07-25 12:16:58.004105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.004124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 00:30:20.900 [2024-07-25 12:16:58.004326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.004345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 00:30:20.900 [2024-07-25 12:16:58.004562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.004592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 00:30:20.900 [2024-07-25 12:16:58.004834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.900 [2024-07-25 12:16:58.004866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.900 qpair failed and we were unable to recover it. 
00:30:20.900 [2024-07-25 12:16:58.005095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.005126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.005302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.005334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.005555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.005586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.005761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.005793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.006017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.006047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.006268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.006299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.006538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.006556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.006705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.006724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.006883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.006913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.007153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.007184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.007357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.007398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.007532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.007551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.007748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.007769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.007911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.007930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.008063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.008082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.008227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.008245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.008393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.008423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.008648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.008680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.008905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.008936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.009096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.009127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.009292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.009323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.009453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.009471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.009593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.009618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.009826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.009857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.010090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.010121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.010308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.010348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.010582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.010601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.010823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.010845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.011048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.011067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.011263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.900 [2024-07-25 12:16:58.011281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.900 qpair failed and we were unable to recover it.
00:30:20.900 [2024-07-25 12:16:58.011483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.011502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.011659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.011678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.011833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.011862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.012114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.012144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.012386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.012417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.012571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.012611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.012898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.012930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.013245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.013275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.013591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.013630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.013798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.013829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.014076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.014107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.014292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.014310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.014521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.014540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.014689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.014708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.014854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.014872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.015104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.015123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.015276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.015295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.015434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.015453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.015733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.015753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.015965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.015984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.016277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.016296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.016492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.016510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.016665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.016696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.016867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.016898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.017177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.017247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.017516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.017549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.017811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.017844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.018117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.018147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.018433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.018471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.018647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.018678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.018835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.018864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.019123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.019153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.019330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.019360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.019535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.019565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.019733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.019764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.020054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.020085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.020258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.020288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.020509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.901 [2024-07-25 12:16:58.020539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.901 qpair failed and we were unable to recover it.
00:30:20.901 [2024-07-25 12:16:58.020816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.020848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.021028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.021059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.021351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.021381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.021619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.021651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.021882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.021913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.022136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.022166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.022407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.022438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.022600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.022640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.022819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.022850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.023082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.023112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.023262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.023284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.023506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.023536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.023772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.023804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.024069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.024115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.024319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.024337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.024493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.024524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.024752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.024783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.025039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.025069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.025196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.025227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.025438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.025456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.025716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.025736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.025980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.026011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.026232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.026263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.026442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.026460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.026583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.026601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.026807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.026826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.026978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.902 [2024-07-25 12:16:58.026996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.902 qpair failed and we were unable to recover it.
00:30:20.902 [2024-07-25 12:16:58.027266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.027296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.027483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.027514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.027672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.027704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.027926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.027956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.028193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.028223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 
00:30:20.902 [2024-07-25 12:16:58.028490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.028521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.028682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.028713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.028882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.028912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.029148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.029179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.902 [2024-07-25 12:16:58.029485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.029515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 
00:30:20.902 [2024-07-25 12:16:58.029672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.902 [2024-07-25 12:16:58.029704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.902 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.029944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.029974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.030132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.030151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.030301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.030332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.030506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.030536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.030702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.030734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.030982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.031012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.031353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.031383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.031545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.031563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.031689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.031709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.031845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.031863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.032153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.032171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.032384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.032403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.032618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.032637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.032842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.032861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.033009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.033039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.033195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.033231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.033452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.033483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.033661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.033692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.033947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.033978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.034209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.034240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.034421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.034439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.034643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.034675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.034848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.034878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.035050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.035081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.035299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.035318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.035484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.035514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.035696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.035729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.035904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.035934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.036096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.036126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.036354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.036373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.036609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.036631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.036759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.036778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.036996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.037027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.037183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.037214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 
00:30:20.903 [2024-07-25 12:16:58.037363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.037393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.903 [2024-07-25 12:16:58.037564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.903 [2024-07-25 12:16:58.037596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.903 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.037937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.037971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.038139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.038169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.038355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.038386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 
00:30:20.904 [2024-07-25 12:16:58.038539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.038570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.038802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.038834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.039055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.039086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.039320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.039339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.039546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.039576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 
00:30:20.904 [2024-07-25 12:16:58.039764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.039799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.040041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.040072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.040310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.040340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.040564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.040594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.040761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.040791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 
00:30:20.904 [2024-07-25 12:16:58.041109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.041152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.041309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.041327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.041484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.041502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.041639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.041658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.041966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.041997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 
00:30:20.904 [2024-07-25 12:16:58.042234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.042265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.042484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.042520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.042755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.042787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.043103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.043134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.043446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.043477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 
00:30:20.904 [2024-07-25 12:16:58.043644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.043663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.043855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.043899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.044133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.044163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.044329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.044360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.044580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.044599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 
00:30:20.904 [2024-07-25 12:16:58.044823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.044841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.045085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.045115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.904 [2024-07-25 12:16:58.045298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.904 [2024-07-25 12:16:58.045328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.904 qpair failed and we were unable to recover it. 00:30:20.905 [2024-07-25 12:16:58.045493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.045523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 00:30:20.905 [2024-07-25 12:16:58.045752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.045773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 
00:30:20.905 [2024-07-25 12:16:58.045926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.045957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 00:30:20.905 [2024-07-25 12:16:58.046129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.046160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 00:30:20.905 [2024-07-25 12:16:58.046391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.046423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 00:30:20.905 [2024-07-25 12:16:58.046674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.046705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 00:30:20.905 [2024-07-25 12:16:58.046892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.905 [2024-07-25 12:16:58.046923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.905 qpair failed and we were unable to recover it. 
00:30:20.905 [2024-07-25 12:16:58.047147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.047178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.047351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.047369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.047564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.047582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.047831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.047850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.047991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.048024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.048284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.048315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.048550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.048580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.048758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.048791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.049026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.049059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.049308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.049338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.049511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.049541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.049761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.049780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.049920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.049939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.050141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.050160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.050439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.050469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.050627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.050658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.050949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.050980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.051206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.051237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.051584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.051624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.051779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.051810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.052028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.052059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.052278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.052314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.052478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.052509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.052739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.052771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.052951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.052982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.053159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.053190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.053352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.053383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.053631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.053651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.053862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.053880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.905 [2024-07-25 12:16:58.054024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.905 [2024-07-25 12:16:58.054043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.905 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.054165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.054208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.054365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.054396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.054626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.054657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.054878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.054909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.055146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.055177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.055430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.055449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.055647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.055667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.055796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.055814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.055961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.055991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.056249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.056280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.056507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.056538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.056830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.056849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.057046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.057064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.057188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.057207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.057334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.057352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.057494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.057512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.057642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.057662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.057903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.057933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.058174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.058205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.058425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.058443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.058708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.058727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.058880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.058899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.059050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.059080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.059222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.059253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.059475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.059506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.059659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.059691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.059924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.059955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.060101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.060133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.060453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.060483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.060713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.060732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.060950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.060980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.061199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.061235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.061457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.061488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.061706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.061725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.062046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.906 [2024-07-25 12:16:58.062076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.906 qpair failed and we were unable to recover it.
00:30:20.906 [2024-07-25 12:16:58.062252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.062283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.062573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.062614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.062840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.062870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.063185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.063216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.063471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.063502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.063757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.063789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.064034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.064065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.064381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.064412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.064700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.064732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.064968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.064999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.065228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.065258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.065433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.065464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.065758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.065777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.066076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.066107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.066335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.066366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.066595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.066621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.066914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.066949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.067265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.067296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.067530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.067561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.067807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.067839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.068060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.068091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.068249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.068279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.068530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.068560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.068838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.068858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.069149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.069167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.069419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.069450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.069755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.069787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.069954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.069985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.070285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.070316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.070477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.070507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.070672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.070692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.070957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.070988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.071170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.071201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.071385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.071415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.071650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.071669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.071864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.071883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.072100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.072121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.072330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.907 [2024-07-25 12:16:58.072349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.907 qpair failed and we were unable to recover it.
00:30:20.907 [2024-07-25 12:16:58.072547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.072565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.072772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.072792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.073023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.073041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.073278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.073297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.073501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.073519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.073758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.073778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.073928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.073958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.074195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.074225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.074511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.074541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.074766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.074785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.075053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.075072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.075285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.908 [2024-07-25 12:16:58.075303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.908 qpair failed and we were unable to recover it.
00:30:20.908 [2024-07-25 12:16:58.075598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.075624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.075779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.075797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.076021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.076052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.076367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.076398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.076660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.076692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 
00:30:20.908 [2024-07-25 12:16:58.077025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.077056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.077402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.077432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.077657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.077689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.078002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.078032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.078263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.078294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 
00:30:20.908 [2024-07-25 12:16:58.078442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.078473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.078806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.078838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.079129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.079160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.079337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.079368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.079490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.079508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 
00:30:20.908 [2024-07-25 12:16:58.079708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.079740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.080032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.080062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.080312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.080342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.080574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.080612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.908 [2024-07-25 12:16:58.080778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.080809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 
00:30:20.908 [2024-07-25 12:16:58.081031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.908 [2024-07-25 12:16:58.081062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.908 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.081299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.081330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.081563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.081581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.081864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.081884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.082152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.082170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.082454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.082473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.082688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.082711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.082974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.083018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.083252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.083283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.083459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.083478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.083757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.083789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.084094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.084124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.084359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.084390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.084570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.084600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.084779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.084810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.085051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.085082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.085389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.085420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.085744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.085776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.086070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.086101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.086278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.086309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.086470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.086500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.086750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.086786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.086925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.086956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.087250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.087280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.087519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.087550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.087789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.087808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.088030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.088049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.088338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.088368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.088598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.088637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.088872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.088891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.089171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.089190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.089384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.089403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.089622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.089641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.089849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.089868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.090110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.090128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 
00:30:20.909 [2024-07-25 12:16:58.090442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.090460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.090760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.909 [2024-07-25 12:16:58.090792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.909 qpair failed and we were unable to recover it. 00:30:20.909 [2024-07-25 12:16:58.091135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.091165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.091455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.091485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.091746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.091778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.092090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.092121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.092430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.092461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.092789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.092821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.093035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.093066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.093324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.093355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.093668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.093700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.094012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.094048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.094211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.094230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.094435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.094465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.094624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.094656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.094886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.094918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.095159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.095189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.095452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.095483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.095707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.095738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.096083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.096115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.096402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.096433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.096744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.096775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.097012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.097043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.097198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.097229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.097460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.097490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.097711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.097730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.098029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.098059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.098296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.098327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.098553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.098585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.098880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.098926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.099181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.099211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.099370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.099388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.099649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.099668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.099931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.099975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.100215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.100246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 
00:30:20.910 [2024-07-25 12:16:58.100571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.100611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.100934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.100966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.101133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.101164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.910 qpair failed and we were unable to recover it. 00:30:20.910 [2024-07-25 12:16:58.101349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.910 [2024-07-25 12:16:58.101380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.101641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.101673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.101913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.101931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.102227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.102246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.102479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.102497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.102777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.102808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.102969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.103000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.103289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.103319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.103617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.103636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.103831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.103850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.104061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.104080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.104286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.104305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.104513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.104532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.104727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.104748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.104906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.104924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.105139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.105169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.105457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.105488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.105737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.105757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.105957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.105975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.106184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.106202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.106414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.106446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.106735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.106767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.107004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.107034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.107214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.107245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.107468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.107486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.107689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.107709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.107861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.107904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.108146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.108177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.108414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.108445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.108671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.108690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.108951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.108991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.109159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.109189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 
00:30:20.911 [2024-07-25 12:16:58.109342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.109373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.911 qpair failed and we were unable to recover it. 00:30:20.911 [2024-07-25 12:16:58.109621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.911 [2024-07-25 12:16:58.109652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.109893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.109924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.110217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.110247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.110565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.110595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.110823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.110842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.111051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.111069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.111336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.111377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.111645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.111678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.111848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.111878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.112042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.112060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.112290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.112309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.112522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.112541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.112816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.112848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.113182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.113213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.113449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.113479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.113715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.113746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.113980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.114011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.114187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.114218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.114403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.114422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.114659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.114691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.114980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.115016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.115223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.115254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.115484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.115502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.115639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.115658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.115857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.115876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.116168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.116199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.116346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.116377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.116538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.116569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.116812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.116844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.117081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.117112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.117401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.117439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.117652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.117671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.117877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.117895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.118081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.118099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 
00:30:20.912 [2024-07-25 12:16:58.118401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.118432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.118652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.118684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.118858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.118889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.119109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.912 [2024-07-25 12:16:58.119139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.912 qpair failed and we were unable to recover it. 00:30:20.912 [2024-07-25 12:16:58.119320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.119350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 
00:30:20.913 [2024-07-25 12:16:58.119638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.119670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.119964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.120000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.120218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.120249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.120479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.120509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.120805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.120837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 
00:30:20.913 [2024-07-25 12:16:58.121099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.121130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.121368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.121399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.121630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.121649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.121812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.121831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 00:30:20.913 [2024-07-25 12:16:58.121976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.913 [2024-07-25 12:16:58.121995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.913 qpair failed and we were unable to recover it. 
00:30:20.915 [2024-07-25 12:16:58.142486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.915 [2024-07-25 12:16:58.142505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.915 qpair failed and we were unable to recover it.
00:30:20.915 [2024-07-25 12:16:58.142850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.915 [2024-07-25 12:16:58.142913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.915 qpair failed and we were unable to recover it.
00:30:20.915 [2024-07-25 12:16:58.143167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.915 [2024-07-25 12:16:58.143201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:20.915 qpair failed and we were unable to recover it.
00:30:20.915 [2024-07-25 12:16:58.143485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.915 [2024-07-25 12:16:58.143528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.915 qpair failed and we were unable to recover it.
00:30:20.915 [2024-07-25 12:16:58.143699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.915 [2024-07-25 12:16:58.143731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.915 qpair failed and we were unable to recover it.
00:30:20.916 [2024-07-25 12:16:58.150412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.150430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 00:30:20.916 [2024-07-25 12:16:58.150627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.150646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 00:30:20.916 [2024-07-25 12:16:58.150844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.150863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 00:30:20.916 [2024-07-25 12:16:58.150991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.151009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 00:30:20.916 [2024-07-25 12:16:58.151220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.151239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 
00:30:20.916 [2024-07-25 12:16:58.151554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.151572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 00:30:20.916 [2024-07-25 12:16:58.151724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.916 [2024-07-25 12:16:58.151743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.916 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.152012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.152156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.152328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 
00:30:21.196 [2024-07-25 12:16:58.152457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.152668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.152796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.152970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.152989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.153144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.153162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 
00:30:21.196 [2024-07-25 12:16:58.153315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.153333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.196 qpair failed and we were unable to recover it. 00:30:21.196 [2024-07-25 12:16:58.153932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.196 [2024-07-25 12:16:58.153953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.154130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.154147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.154346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.154364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.154572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.154591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.154734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.154758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.155046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.155065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.155232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.155250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.155397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.155415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.155968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.155999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.156238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.156257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.156486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.156504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.156822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.156855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.157030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.157061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.157433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.157464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.157706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.157738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.157970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.158001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.158363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.158394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.158629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.158662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.158922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.158953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.159138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.159169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.159354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.159385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.159615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.159646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.159884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.159902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.160061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.160079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.160304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.160322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.160473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.160491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.160683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.160702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.160860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.160879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.161141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.161159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.161400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.161418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.161634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.161654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.161886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.161905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.162048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.162070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.162270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.162288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 
00:30:21.197 [2024-07-25 12:16:58.162492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.162511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.162650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.162670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.162870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.162889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.197 [2024-07-25 12:16:58.163110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.197 [2024-07-25 12:16:58.163129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.197 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.163329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.163348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 
00:30:21.198 [2024-07-25 12:16:58.163489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.163508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.163804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.163823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.164025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.164044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.164170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.164189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.164467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.164486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 
00:30:21.198 [2024-07-25 12:16:58.164625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.164645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.164790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.164809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.165025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.165043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.165184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.165203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.165346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.165364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 
00:30:21.198 [2024-07-25 12:16:58.165559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.165578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.165799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.165819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.165960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.165978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.166244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.166263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.166477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.166495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 
00:30:21.198 [2024-07-25 12:16:58.166706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.166727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.166952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.166970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.167164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.167182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.167385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.167404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 00:30:21.198 [2024-07-25 12:16:58.167667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.198 [2024-07-25 12:16:58.167686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.198 qpair failed and we were unable to recover it. 
00:30:21.198 [2024-07-25 12:16:58.167901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.198 [2024-07-25 12:16:58.167920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.198 qpair failed and we were unable to recover it.
[The same three-line sequence — connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f7bfc000b90 (addr=10.0.0.2, port=4420), followed by "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 12:16:58.168155 through 12:16:58.196081.]
00:30:21.201 [2024-07-25 12:16:58.196339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.196369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.201 [2024-07-25 12:16:58.196632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.196664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.201 [2024-07-25 12:16:58.196982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.197012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.201 [2024-07-25 12:16:58.197300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.197331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.201 [2024-07-25 12:16:58.197568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.197598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 
00:30:21.201 [2024-07-25 12:16:58.197792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.197824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.201 [2024-07-25 12:16:58.198108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.198138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.201 [2024-07-25 12:16:58.198308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.201 [2024-07-25 12:16:58.198338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.201 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.198559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.198589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.198844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.198863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.199211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.199241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.199533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.199563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.199809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.199840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.200073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.200104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.200421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.200451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.200694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.200726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.200911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.200942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.201100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.201130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.201385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.201420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.201593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.201617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.201811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.201829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.202119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.202149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.202437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.202467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.202684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.202716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.202894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.202912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.203129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.203147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.203410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.203428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.203715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.203746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.203982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.204012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.204261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.204291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.204581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.204632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.204804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.204822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.204969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.205010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.205224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.205254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.205416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.205446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.205690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.205722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.206008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.206026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.206325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.206355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.206643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.206675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.206909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.206939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.207196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.207226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.207539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.207569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.207832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.207863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.208096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.208114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 00:30:21.202 [2024-07-25 12:16:58.208378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.208422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.202 qpair failed and we were unable to recover it. 
00:30:21.202 [2024-07-25 12:16:58.208774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.202 [2024-07-25 12:16:58.208807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.209072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.209103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.209374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.209405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.209587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.209630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.209799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.209817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.210051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.210081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.210309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.210339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.210596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.210638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.210927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.210957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.211198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.211229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.211467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.211498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.211717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.211749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.212044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.212088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.212376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.212411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.212589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.212629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.212875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.212905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.213232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.213262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.213562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.213593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.213836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.213867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.214133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.214164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.214425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.214455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.214653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.214672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.214894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.214913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.215234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.215253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.215449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.215467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.215728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.215748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.216012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.216030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.216246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.216265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.216461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.216479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.216767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.216812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.217053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.217084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.217311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.217341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.217629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.217661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.217838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.217868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.218112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.218143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 
00:30:21.203 [2024-07-25 12:16:58.218309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.218339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.218628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.218660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.218856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.203 [2024-07-25 12:16:58.218886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.203 qpair failed and we were unable to recover it. 00:30:21.203 [2024-07-25 12:16:58.219125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.219155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.219372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.219391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.219533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.219552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.219847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.219879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.220058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.220088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.220263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.220294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.220615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.220646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.220879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.220909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.221256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.221287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.221553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.221583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.221812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.221831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.222027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.222045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.222246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.222265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.222563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.222594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.222922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.222965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.223181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.223203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.223344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.223362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.223684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.223716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.223883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.223902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.224101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.224132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.224362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.224392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.224662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.224694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.225009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.225039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.225261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.225291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.225625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.225657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.225949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.225980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.226231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.226261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.226610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.226642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.226940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.226971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.227267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.227297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.227544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.227574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.227899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.227931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.228164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.228194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.228513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.228544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.228790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.228822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.229041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.229072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.229378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.229408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 
00:30:21.204 [2024-07-25 12:16:58.229704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.204 [2024-07-25 12:16:58.229736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.204 qpair failed and we were unable to recover it. 00:30:21.204 [2024-07-25 12:16:58.229973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.230003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.230240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.230270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.230589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.230628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.230808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.230838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 
00:30:21.205 [2024-07-25 12:16:58.231134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.231166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.231403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.231433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.231592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.231632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.231948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.231978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.232290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.232320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 
00:30:21.205 [2024-07-25 12:16:58.232567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.232598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.232873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.232904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.233081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.233112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.233328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.233347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.233543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.233561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 
00:30:21.205 [2024-07-25 12:16:58.233773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.233793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.233946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.233976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.234194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.234225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.234457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.234493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.234664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.234695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 
00:30:21.205 [2024-07-25 12:16:58.234943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.234962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.235171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.235201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.235493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.235524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.235788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.235819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.236125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.236170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 
00:30:21.205 [2024-07-25 12:16:58.236469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.236500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.236736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.236767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.237066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.237100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.237423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.237453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.237676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.237708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 
00:30:21.205 [2024-07-25 12:16:58.237897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.237928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.238163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.238182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.238479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.238510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.238802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.238834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.205 qpair failed and we were unable to recover it. 00:30:21.205 [2024-07-25 12:16:58.239149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.205 [2024-07-25 12:16:58.239180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 
00:30:21.206 [2024-07-25 12:16:58.239386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.239417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.239637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.239669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.239966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.239997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.240161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.240191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.240483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.240514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 
00:30:21.206 [2024-07-25 12:16:58.240743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.240762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.240968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.240999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.241218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.241249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.241544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.241575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.241845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.241878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 
00:30:21.206 [2024-07-25 12:16:58.242187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.242255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.242488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.242522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.242842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.242875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.243165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.243196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.243427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.243457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 
00:30:21.206 [2024-07-25 12:16:58.243641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.243673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.243913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.243943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.244257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.244287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.244518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.244548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 00:30:21.206 [2024-07-25 12:16:58.244713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.206 [2024-07-25 12:16:58.244734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.206 qpair failed and we were unable to recover it. 
00:30:21.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 119443 Killed "${NVMF_APP[@]}" "$@"
00:30:21.208 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:21.208 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:21.208 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:21.208 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:21.208 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=120416
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 120416
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 120416 ']'
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:21.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:21.209 12:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.211 [2024-07-25 12:16:58.292073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.211 [2024-07-25 12:16:58.292091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.292366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.292384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.292590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.292620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.292928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.292947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.293208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.293227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.293363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.293382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.293586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.293610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.293889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.293907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.294170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.294189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.294454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.294472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.294694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.294713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.294959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.294978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.295151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.295170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.295324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.295345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.295559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.295577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.295738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.295759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.295978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.295997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.296176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.296196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.296412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.296431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.296652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.296671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.296894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.296913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.297118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.297136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.297347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.297365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.297627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.297646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.297916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.297935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.298171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.298190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.298415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.298435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.298655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.298673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.298885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.298904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.299066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.299085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.299357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.299376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.299509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.299528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.299740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.299759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.300050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.300068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.300284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.300303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 
00:30:21.212 [2024-07-25 12:16:58.300593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.300620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.300831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.300849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.212 qpair failed and we were unable to recover it. 00:30:21.212 [2024-07-25 12:16:58.301114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.212 [2024-07-25 12:16:58.301133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.301346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.301364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.301570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.301588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 
00:30:21.213 [2024-07-25 12:16:58.301797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.301816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.302113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.302132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.302407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.302428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.302641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.302660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.302940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.302959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 
00:30:21.213 [2024-07-25 12:16:58.303172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.303190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.303480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.303499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.303641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.303662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.303878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.303896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.304174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.304194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 
00:30:21.213 [2024-07-25 12:16:58.304408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.304427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.304691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.304710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.304961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.304979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.305208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.305227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.305440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.305459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 
00:30:21.213 [2024-07-25 12:16:58.305598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.305629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.305897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.305916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.306109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.306128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.306343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.306361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.306526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.306545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 
00:30:21.213 [2024-07-25 12:16:58.306857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.306876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.307143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.307161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.307375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.307395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.307614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.307634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.307907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.307925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 
00:30:21.213 [2024-07-25 12:16:58.308188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.308206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.308367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.308386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.308528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.213 [2024-07-25 12:16:58.308546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.213 qpair failed and we were unable to recover it. 00:30:21.213 [2024-07-25 12:16:58.308738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.308757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.309030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.309098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 
00:30:21.214 [2024-07-25 12:16:58.309391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.309424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.309682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.309717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.310003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.310024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.310258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.310276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.310477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.310495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 
00:30:21.214 [2024-07-25 12:16:58.310785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.310805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.310929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.310947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.311097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.311115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.311345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.311363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 00:30:21.214 [2024-07-25 12:16:58.311656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.214 [2024-07-25 12:16:58.311675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.214 qpair failed and we were unable to recover it. 
00:30:21.214 [2024-07-25 12:16:58.311829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.214 [2024-07-25 12:16:58.311847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.214 qpair failed and we were unable to recover it.
[the error triplet above (posix.c:1023:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error and "qpair failed and we were unable to recover it.") repeats for the same tqpair=0x7f7bfc000b90, addr=10.0.0.2, port=4420 at successive timestamps from 12:16:58.312109 through 12:16:58.339879; only the following unique records are interleaved:]
00:30:21.215 [2024-07-25 12:16:58.323231] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization...
00:30:21.215 [2024-07-25 12:16:58.323282] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:21.217 [2024-07-25 12:16:58.340085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.340104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.340294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.340313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.340583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.340609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.340767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.340786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.340927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.340946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 
00:30:21.217 [2024-07-25 12:16:58.341076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.341095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.341386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.341404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.341618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.217 [2024-07-25 12:16:58.341641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.217 qpair failed and we were unable to recover it. 00:30:21.217 [2024-07-25 12:16:58.341846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.341864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.342083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.342101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.342356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.342374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.342638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.342657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.342886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.342904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.343112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.343130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.343344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.343363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.343574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.343592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.343863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.343882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.344155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.344173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.344466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.344484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.344678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.344697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.344937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.344956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.345272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.345290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.345453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.345471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.345684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.345703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.346013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.346031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.346319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.346338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.346484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.346502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.346708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.346727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.346923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.346941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.347154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.347173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.347458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.347476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.347614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.347633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.347838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.347856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.348060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.348079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.348188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.348207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.348407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.348426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.348744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.348763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.348961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.348979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.349272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.349291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.349500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.349518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.349728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.349747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.350024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.350043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.350183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.350201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.350490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.350509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.218 [2024-07-25 12:16:58.350715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.350734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 
00:30:21.218 [2024-07-25 12:16:58.350960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.218 [2024-07-25 12:16:58.350979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.218 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.351192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.351210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.351424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.351445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.351573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.351592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.351754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.351774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 
00:30:21.219 [2024-07-25 12:16:58.351905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.351924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.352082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.352101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.352388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.352407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.352631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.352650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.352858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.352876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 
00:30:21.219 [2024-07-25 12:16:58.353018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.353037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.353283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.353301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.353512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.353530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.353770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.353790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.353997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.354015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 
00:30:21.219 [2024-07-25 12:16:58.354147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.354166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.354409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.354430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.354650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.354669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.354893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.354912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.355037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.355056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 
00:30:21.219 [2024-07-25 12:16:58.355290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.355309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.355594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.355619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.355834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.355852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.356114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.356132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.356354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.356373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 
00:30:21.219 [2024-07-25 12:16:58.356510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.356529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.356770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.356789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.357027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.357046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.357160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.357179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 00:30:21.219 [2024-07-25 12:16:58.357548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.219 [2024-07-25 12:16:58.357625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420 00:30:21.219 qpair failed and we were unable to recover it. 
00:30:21.219 [2024-07-25 12:16:58.357886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.357922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.358090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.358121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.358411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.358442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.358682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.358714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.359006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.359037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.359237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.359259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.359468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.359487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.359722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.219 [2024-07-25 12:16:58.359741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.219 qpair failed and we were unable to recover it.
00:30:21.219 [2024-07-25 12:16:58.359977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.359995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.360286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.360305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.360566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.360584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.360804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.360823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.360973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.360994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.361307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.361326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 EAL: No free 2048 kB hugepages reported on node 1
00:30:21.220 [2024-07-25 12:16:58.361594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.361630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.361838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.361856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.362118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.362136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.362290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.362308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.362599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.362626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.362821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.362840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.363102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.363120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.363413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.363432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.363635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.363654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.363970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.363988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.364128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.364147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.364358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.364376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.364585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.364610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.364826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.364846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.365139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.365158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.365304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.365322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.365534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.365553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.365842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.365861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.366054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.366073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.366379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.366397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.366567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.366585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.366880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.366898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.367163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.367181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.367506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.367525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.367822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.367842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.368052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.220 [2024-07-25 12:16:58.368071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.220 qpair failed and we were unable to recover it.
00:30:21.220 [2024-07-25 12:16:58.368363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.368381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.368575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.368594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.368887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.368906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.369120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.369139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.369292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.369310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.369533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.369552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.369747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.369766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.369993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.370012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.370305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.370324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.370532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.370550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.370779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.370799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.371011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.371030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.371240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.371262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.371493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.371511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.371707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.371727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.371991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.372009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.372306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.372324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.372625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.372644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.372906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.372924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.373209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.373227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.373430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.373448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.373652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.373671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.373931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.373950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.374236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.374254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.374543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.374561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.374792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.374811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.375030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.375049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.375362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.375380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.375496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.375515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.375679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.375699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.375907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.375926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.376126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.376145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.376341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.376360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.376521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.376540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.376739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.376758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.376972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.376990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.377195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.377214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.221 qpair failed and we were unable to recover it.
00:30:21.221 [2024-07-25 12:16:58.377443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.221 [2024-07-25 12:16:58.377461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.377731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.377750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.377962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.377981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.378192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.378210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.378418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.378437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.378657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.378677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.378814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.378833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.378984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.379002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.379151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.379169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.379364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.379382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.379595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.379622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.379813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.379831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.380137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.380156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.380299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.380318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.380522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.380541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.380808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.380830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.380985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.381004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.381269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.381288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.381548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.381567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.381864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.381883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.382076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.382094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.382357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.382376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.382545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.382563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.382775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.382794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.382929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.382948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.383154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.383173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.383397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.383415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.383703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.383722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.383858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.383876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.384171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.384190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.384405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.384423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.384637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.384656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.384859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.384878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.385085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.385103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.385240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.385259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.385467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.385485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.385685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.385705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.385969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.222 [2024-07-25 12:16:58.385987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.222 qpair failed and we were unable to recover it.
00:30:21.222 [2024-07-25 12:16:58.386197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.386216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.386361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.386379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.386528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.386546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.386762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.386781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.386929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.386948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.387161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.387182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.387483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.387501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.387660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.387679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.387906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.387925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.388069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.388088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.388308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.388327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.388620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.388639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.388841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.388859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.389011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.389029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.389167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.389186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.389332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.389351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.389521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.389539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.389738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.389762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.390055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.390074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.390303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.390321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.390584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.390610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.390826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.390845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.391051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.391070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.391267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.391286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.391550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.391568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.391796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.391815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.392035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.392054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.392271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.392291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.392548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.392566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.392813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.392832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.393053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.393071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.393289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.393308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.393514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.393532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.393741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.393761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.393975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.393994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 
00:30:21.223 [2024-07-25 12:16:58.394291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.394310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.394559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.394578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.223 qpair failed and we were unable to recover it. 00:30:21.223 [2024-07-25 12:16:58.394808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.223 [2024-07-25 12:16:58.394827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.395165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.395184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.395461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.395479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.395743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.395762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.395975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.395993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.396206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.396224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.396418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.396436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.396652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.396671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.396884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.396902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.397192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.397211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.397377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.397396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.397640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.397660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.397930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.397948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.398160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.398179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.398477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.398495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.398757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.398776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.398979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.398997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.399200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.399218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.399454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.399472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.399649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.399668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.399959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.399981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.400224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.400242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.400449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.400467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.400748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.400767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.400973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.400992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.401187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.401206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.401415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.401434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.401660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.401680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.401947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.401965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.402202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.402221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.402442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.402461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.402667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.402686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 00:30:21.224 [2024-07-25 12:16:58.402958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.224 [2024-07-25 12:16:58.402976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.224 qpair failed and we were unable to recover it. 
00:30:21.224 [2024-07-25 12:16:58.403171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.225 [2024-07-25 12:16:58.403190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.225 qpair failed and we were unable to recover it. 00:30:21.225 [2024-07-25 12:16:58.403418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.225 [2024-07-25 12:16:58.403436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.225 qpair failed and we were unable to recover it. 00:30:21.225 [2024-07-25 12:16:58.403581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.225 [2024-07-25 12:16:58.403600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.225 qpair failed and we were unable to recover it. 00:30:21.225 [2024-07-25 12:16:58.403826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.225 [2024-07-25 12:16:58.403845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.225 qpair failed and we were unable to recover it. 00:30:21.225 [2024-07-25 12:16:58.404081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.225 [2024-07-25 12:16:58.404099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.225 qpair failed and we were unable to recover it. 
00:30:21.225 [2024-07-25 12:16:58.404312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.404330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.404573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.404592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.404900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.404919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.405156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.405175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.405332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.405351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.405566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.405585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.405911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.405957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.406148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.406179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.406447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.406478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.406852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.406874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.407091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.407110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.407351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.407369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.407575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.407593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.407859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.407878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.408174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.408193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.408390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.408408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.408701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.408720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.408923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.408941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.409155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.409174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.409387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.409406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.409548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.409567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.409796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.409814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.410081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.410105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.410390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.410409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.410616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.410635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.410942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.410960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.411184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.411203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.411353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.411372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.411652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.411671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.411938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.411956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.412240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.412258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.412469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.412487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.225 [2024-07-25 12:16:58.412692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.225 [2024-07-25 12:16:58.412712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.225 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.412867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.412885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.413106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.413124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.413438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.413456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.413664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.413684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.413893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.413912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.414120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.414138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.414400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.414419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.414580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.414599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.414799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.414817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.415104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.415122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.415320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.415338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.415628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.415648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.415753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.415771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.415984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.416002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.416213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.416232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.416492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.416510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.416667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.416686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.416905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.416923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.417201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.417219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.417445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.417463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.417656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.417675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.417966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.417984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.418248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.418267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.418555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.418574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.418812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.418831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.419039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.419057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.419269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.419288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.419578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.419596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.419848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.419867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.420130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.420152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.420393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.420412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.420616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.420636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.420852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.420870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.421014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.421033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.421296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.421314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.421463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.421482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.421743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.226 [2024-07-25 12:16:58.421762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.226 qpair failed and we were unable to recover it.
00:30:21.226 [2024-07-25 12:16:58.421973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.421992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.422307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.422325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.422615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.422634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.422771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.422789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.423024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.423043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.423204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.423223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.423488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.423506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.423770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.423789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.424053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.424071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.424361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.424379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.424672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.424692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.424926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.424945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.425096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.425114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.425313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.425332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.425541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.425559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.425774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.425792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.426000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.426019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.426257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.426275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.426482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.426501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.426723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.426742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.426941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.426959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.427200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.427218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.427427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.427446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.427599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.427623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.427910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.427929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.428192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.428210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.428419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.428438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.428668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.428688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.428915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.428933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.429203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.429222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.429426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.429445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.429657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.429676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.429827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.429848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.430079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.430098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.430358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.430376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.430523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.430541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.430677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.430696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.431007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.227 [2024-07-25 12:16:58.431025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.227 qpair failed and we were unable to recover it.
00:30:21.227 [2024-07-25 12:16:58.431313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.431332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.431608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.431627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.431822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.431841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.431983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.432001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.432149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.432167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.432430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.432448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.432738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.228 [2024-07-25 12:16:58.432757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.228 qpair failed and we were unable to recover it.
00:30:21.228 [2024-07-25 12:16:58.433106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.433124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.433272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.433291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.433581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.433599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.433869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.433888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.434021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.434039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 
00:30:21.228 [2024-07-25 12:16:58.434301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.434320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.434581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.434599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.434815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.434834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.435038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.435057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.435276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.435294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 
00:30:21.228 [2024-07-25 12:16:58.435444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.435462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.435745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.435765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.436028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.436046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.436257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.436275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.436476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.436495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 
00:30:21.228 [2024-07-25 12:16:58.436691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.436711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.436907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.436925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.437189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.437210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.437425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.437444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.437663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.437682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 
00:30:21.228 [2024-07-25 12:16:58.437945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.437964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.438107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.438125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.438317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.438335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.438543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.438561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.438705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.438724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 
00:30:21.228 [2024-07-25 12:16:58.439016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.439035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.228 [2024-07-25 12:16:58.439242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.228 [2024-07-25 12:16:58.439261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.228 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.439459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.439478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.439676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.439695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.439891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.439910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 
00:30:21.229 [2024-07-25 12:16:58.440174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.440193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.440469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.440487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.440726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.440745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.440903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.440921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.441117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.441135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 
00:30:21.229 [2024-07-25 12:16:58.441327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.441346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.441466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.441485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.441692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.441712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 [2024-07-25 12:16:58.441705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.441948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.441967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.442095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.442113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 
00:30:21.229 [2024-07-25 12:16:58.442267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.442286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.442517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.442537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.442678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.442698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.442961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.442980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.443249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.443268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 
00:30:21.229 [2024-07-25 12:16:58.443535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.443554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.443777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.443796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.444034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.444053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.444296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.444315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.444545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.444564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 
00:30:21.229 [2024-07-25 12:16:58.444717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.444738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.444894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.444912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.445203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.445222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.445533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.445552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.445787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.445807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 
00:30:21.229 [2024-07-25 12:16:58.446032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.446051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.446169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.446188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.446395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.229 [2024-07-25 12:16:58.446414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.229 qpair failed and we were unable to recover it. 00:30:21.229 [2024-07-25 12:16:58.446614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.446633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.446850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.446868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 
00:30:21.230 [2024-07-25 12:16:58.447024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.447042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.447249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.447268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.447533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.447552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.447751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.447771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.447979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.447998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 
00:30:21.230 [2024-07-25 12:16:58.448229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.448248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.448445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.448464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.448694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.448716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.448907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.448926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.449072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.449091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 
00:30:21.230 [2024-07-25 12:16:58.449308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.449327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.449622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.449642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.449798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.449817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.450030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.450048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 00:30:21.230 [2024-07-25 12:16:58.450323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.230 [2024-07-25 12:16:58.450342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.230 qpair failed and we were unable to recover it. 
00:30:21.230 [2024-07-25 12:16:58.450496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.230 [2024-07-25 12:16:58.450515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.230 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 12:16:58.450 through 12:16:58.477 for tqpair=0x7f7bfc000b90 and tqpair=0x7f7bf4000b90; identical repeats elided ...]
00:30:21.510 [2024-07-25 12:16:58.477495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.510 [2024-07-25 12:16:58.477514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.510 qpair failed and we were unable to recover it. 00:30:21.510 [2024-07-25 12:16:58.477728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.510 [2024-07-25 12:16:58.477747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.510 qpair failed and we were unable to recover it. 00:30:21.510 [2024-07-25 12:16:58.478067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.478085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.478322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.478341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.478510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.478528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.478762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.478781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.478997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.479016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.479226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.479244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.479393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.479411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.479648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.479668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.479965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.479983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.480119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.480138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.480387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.480405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.480701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.480720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.480935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.480954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.481098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.481116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.481407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.481426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.481629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.481648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.481910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.481929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.482154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.482172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.482464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.482483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.482632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.482652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.482777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.482795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.483019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.483038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.483246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.483265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.483431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.483449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.483590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.483614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.483811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.483830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.484116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.484134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.484296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.484314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.484608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.484628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.484896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.484914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.485137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.485155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.485447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.485465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.485624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.485643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 
00:30:21.511 [2024-07-25 12:16:58.485797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.485815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.486008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.486027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.486293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.486314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.486522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.486541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.511 qpair failed and we were unable to recover it. 00:30:21.511 [2024-07-25 12:16:58.486825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.511 [2024-07-25 12:16:58.486845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.487082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.487101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.487307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.487328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.487543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.487562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.487872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.487891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.488158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.488176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.488403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.488421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.488684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.488704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.488918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.488937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.489168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.489186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.489403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.489421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.489572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.489590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.489802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.489821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.490101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.490120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.490324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.490343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.490628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.490648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.490949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.490967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.491175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.491194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.491333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.491352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.491638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.491657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.491871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.491889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.492113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.492132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.492448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.492467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.492672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.492691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.492953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.492972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.493252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.493270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.493461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.493480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.493628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.493647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.493853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.493872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.494081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.494099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.494412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.494430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.494692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.494711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.494975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.494993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.495187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.495206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.495420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.495439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.495558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.495576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 
00:30:21.512 [2024-07-25 12:16:58.495739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.495758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.512 qpair failed and we were unable to recover it. 00:30:21.512 [2024-07-25 12:16:58.495972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.512 [2024-07-25 12:16:58.495992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.513 qpair failed and we were unable to recover it. 00:30:21.513 [2024-07-25 12:16:58.496201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.513 [2024-07-25 12:16:58.496223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.513 qpair failed and we were unable to recover it. 00:30:21.513 [2024-07-25 12:16:58.496434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.513 [2024-07-25 12:16:58.496453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.513 qpair failed and we were unable to recover it. 00:30:21.513 [2024-07-25 12:16:58.496663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.513 [2024-07-25 12:16:58.496683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.513 qpair failed and we were unable to recover it. 
00:30:21.513 [the posix.c:1023:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error" pair above repeats about 110 more times between 12:16:58.496900 and 12:16:58.523411 (elapsed 00:30:21.513 through 00:30:21.516), almost all for tqpair=0x7f7bfc000b90, with a few attempts for tqpair=0x7f7bec000b90 and tqpair=0x12b1da0, all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:30:21.516 [2024-07-25 12:16:58.523618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.523638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.523925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.523944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.524174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.524193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.524512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.524532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.524657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.524677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 
00:30:21.516 [2024-07-25 12:16:58.524942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.524962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.525180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.525199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.525395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.525421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.525626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.525646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.525851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.525870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 
00:30:21.516 [2024-07-25 12:16:58.526134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.526153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.526370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.526389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.526596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.526621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.526840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.526859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.527069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.527087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 
00:30:21.516 [2024-07-25 12:16:58.527291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.527309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.527573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.527592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.527830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.527848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.528055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.528073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.528296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.528315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 
00:30:21.516 [2024-07-25 12:16:58.528526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.528544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.528756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.528775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.529047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.529065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.529217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.529236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.529504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.529522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 
00:30:21.516 [2024-07-25 12:16:58.529734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.529753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.530015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.530034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.530247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.516 [2024-07-25 12:16:58.530266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.516 qpair failed and we were unable to recover it. 00:30:21.516 [2024-07-25 12:16:58.530405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.530423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.530667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.530685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.530896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.530915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.531114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.531132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.531452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.531470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.531683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.531702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.531914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.531932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.532123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.532141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.532353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.532387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.532583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.532601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.532858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.532877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.533144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.533162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.533315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.533333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.533542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.533560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.533706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.533726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.533938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.533957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.534106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.534124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.534320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.534339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.534495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.534513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.534827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.534849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.535044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.535062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.535323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.535342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.535552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.535570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.535914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.535934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.536174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.536193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.536480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.536499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.536735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.536754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.536893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.536911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.537117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.537135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.537425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.537446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.537668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.537687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.537929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.537948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.538145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.538163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.538405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.538423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.538630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.538649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.538859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.538878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.517 [2024-07-25 12:16:58.539115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.539133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 
00:30:21.517 [2024-07-25 12:16:58.539364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.517 [2024-07-25 12:16:58.539382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.517 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.539654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.539673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.539883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.539901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.540060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.540078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.540212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.540231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 
00:30:21.518 [2024-07-25 12:16:58.540442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.540460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.540627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.540646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.540910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.540928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.541071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.541089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 00:30:21.518 [2024-07-25 12:16:58.541302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.518 [2024-07-25 12:16:58.541324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.518 qpair failed and we were unable to recover it. 
00:30:21.518 [2024-07-25 12:16:58.541588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.518 [2024-07-25 12:16:58.541613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.518 qpair failed and we were unable to recover it.
00:30:21.521 [2024-07-25 12:16:58.570106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.570124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.570343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.570361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.570574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.570592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.570866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.570885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.571083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.571105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 
00:30:21.521 [2024-07-25 12:16:58.571378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.571541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.571559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.571823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.571842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.572111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.572129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.572419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.572438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 
00:30:21.521 [2024-07-25 12:16:58.572704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.572723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.572836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.572854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.573051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.573069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.573324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.573342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.573569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.573587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 
00:30:21.521 [2024-07-25 12:16:58.573750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.573769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.573982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.574000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.574154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.574173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.574383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.574402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.574613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.574633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 
00:30:21.521 [2024-07-25 12:16:58.574842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.521 [2024-07-25 12:16:58.574860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.521 qpair failed and we were unable to recover it. 00:30:21.521 [2024-07-25 12:16:58.575148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.575167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.575366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.575384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.575520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.575538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.575797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.575817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.576082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.576101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.576317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.576335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.576537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.576556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.576764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.576782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.576943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.576962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.577160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.577178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.577337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.577356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.577481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.577499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.577694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.577713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.577960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.577979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.578250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.578269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.578550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.578568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.578855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.578874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.579074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.579092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.579353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.579372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.579579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.579598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.579762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.579781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.579976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.579994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.580188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.580206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.580366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.580388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.580677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.580697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.580807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.580825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.581032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.581050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.581257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.581275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.581536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.581555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.581703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.581721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.581954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.581973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.582245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.582266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.582404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.582422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.582642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.582661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 
00:30:21.522 [2024-07-25 12:16:58.582889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.582908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.583111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.583130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.583341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.522 [2024-07-25 12:16:58.583360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.522 qpair failed and we were unable to recover it. 00:30:21.522 [2024-07-25 12:16:58.583634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.583653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.583876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.583895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 
00:30:21.523 [2024-07-25 12:16:58.584039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.584058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.584160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.584178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.584383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.584402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.584698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.584718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.585006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.585025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 
00:30:21.523 [2024-07-25 12:16:58.585259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.585277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.585542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.585561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.585824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.585843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.586040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.586059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.586217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.586236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 
00:30:21.523 [2024-07-25 12:16:58.586392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.586410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.586616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.586636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.586900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.586919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.587125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.587146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.587379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.587401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 
00:30:21.523 [2024-07-25 12:16:58.587691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.587711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.587917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.587936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.588150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.588170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.588493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.588513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.588665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.588685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 
00:30:21.523 [2024-07-25 12:16:58.588893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.588913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.589064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.589083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.589377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.589395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.589628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.589647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 00:30:21.523 [2024-07-25 12:16:58.589874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.523 [2024-07-25 12:16:58.589897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.523 qpair failed and we were unable to recover it. 
00:30:21.523 (connect() failed / qpair failed messages repeated for attempts timestamped 12:16:58.590132 and 12:16:58.590368)
00:30:21.523 [2024-07-25 12:16:58.590429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:21.523 [2024-07-25 12:16:58.590489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:21.523 [2024-07-25 12:16:58.590511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:21.523 [2024-07-25 12:16:58.590531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:21.523 [2024-07-25 12:16:58.590546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:21.523 [2024-07-25 12:16:58.590682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:21.523 [2024-07-25 12:16:58.590780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:21.523 [2024-07-25 12:16:58.590894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:21.523 [2024-07-25 12:16:58.590899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:21.523 (connect() failed / qpair failed messages repeated for attempts timestamped 12:16:58.590652 through 12:16:58.591609)
00:30:21.524 (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it: sequence repeated for attempts timestamped 12:16:58.591839 through 12:16:58.614784)
00:30:21.526 [2024-07-25 12:16:58.615000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.615019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.615180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.615199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.615413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.615432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.615577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.615597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.615877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.615896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 
00:30:21.526 [2024-07-25 12:16:58.616054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.616073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.616229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.616249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.616441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.616459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.616653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.616673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.616876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.616895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 
00:30:21.526 [2024-07-25 12:16:58.617054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.617075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.526 qpair failed and we were unable to recover it. 00:30:21.526 [2024-07-25 12:16:58.617349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.526 [2024-07-25 12:16:58.617369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.617564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.617584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.617833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.617853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.618078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.618102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.618294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.618313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.618621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.618641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.618883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.618902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.619056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.619076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.619342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.619362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.619632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.619652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.619918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.619937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.620098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.620119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.620343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.620362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.620626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.620646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.620926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.620945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.621240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.621258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.621466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.621485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.621783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.621803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.622073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.622092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.622375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.622395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.622607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.622628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.622891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.622909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.623124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.623143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.623368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.623387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.623588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.623613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.623873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.623892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.624091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.624110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.624317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.624336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.624492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.624510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.624727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.624749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.624956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.624975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.625308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.625328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.625539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.625559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.625849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.625869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 
00:30:21.527 [2024-07-25 12:16:58.626073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.626094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.626213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.626231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.626494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.626512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.626703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.527 [2024-07-25 12:16:58.626722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.527 qpair failed and we were unable to recover it. 00:30:21.527 [2024-07-25 12:16:58.626920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.626939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [2024-07-25 12:16:58.627153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.627172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.627453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.627472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.627679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.627699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.627832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.627851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.628080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.628103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [2024-07-25 12:16:58.628342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.628362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.628516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.628536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.628806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.628827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.629119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.629139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.629291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.629310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [2024-07-25 12:16:58.629629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.629650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.629913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.629932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.630078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.630097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.630361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.630380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.630587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.630614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [2024-07-25 12:16:58.630913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.630932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.631074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.631093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.631405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.631426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.631628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.631648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.631803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.631822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [2024-07-25 12:16:58.632087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.632106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.632338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.632357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.632552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.632570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.632783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.632804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 00:30:21.528 [2024-07-25 12:16:58.633115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.633135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [2024-07-25 12:16:58.633289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.528 [2024-07-25 12:16:58.633307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.528 qpair failed and we were unable to recover it. 
00:30:21.528 [... the same error cycle — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 12:16:58.633289 through 12:16:58.661283 ...]
00:30:21.531 [2024-07-25 12:16:58.661555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.531 [2024-07-25 12:16:58.661574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.531 qpair failed and we were unable to recover it. 00:30:21.531 [2024-07-25 12:16:58.661816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.531 [2024-07-25 12:16:58.661835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.531 qpair failed and we were unable to recover it. 00:30:21.531 [2024-07-25 12:16:58.661978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.531 [2024-07-25 12:16:58.661997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.662232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.662252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.662403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.662421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.662564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.662583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.662751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.662771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.662966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.662985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.663191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.663210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.663421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.663439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.663672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.663716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.663956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.663975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.664186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.664205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.664418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.664437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.664708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.664727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.664955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.664974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.665117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.665138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.665371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.665389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.665660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.665679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.665892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.665910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.666150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.666169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.666366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.666384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.666613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.666632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.666860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.666879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.667028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.667050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.667315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.667334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.667537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.667556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.667704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.667725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.668002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.668021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.668211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.668230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.668492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.668511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.668647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.668666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.668955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.668974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.669206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.669226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.669419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.669437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.669661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.669680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.669783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.669802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.669990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.670009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.670145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.670164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 00:30:21.532 [2024-07-25 12:16:58.670406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.532 [2024-07-25 12:16:58.670425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.532 qpair failed and we were unable to recover it. 
00:30:21.532 [2024-07-25 12:16:58.670789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.670809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.671017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.671036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.671233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.671252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.671533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.671551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.671758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.671777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 
00:30:21.533 [2024-07-25 12:16:58.672022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.672041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.672251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.672270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.672553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.672571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.672781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.672800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.673096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.673114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 
00:30:21.533 [2024-07-25 12:16:58.673347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.673365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.673495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.673513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.673725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.673744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.673940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.673958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.674224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.674242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 
00:30:21.533 [2024-07-25 12:16:58.674391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.674410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.674562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.674580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.674774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.674793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.675028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.675047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.675244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.675262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 
00:30:21.533 [2024-07-25 12:16:58.675395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.675414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.675626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.675645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.675853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.675872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.676068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.676086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.676323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.676345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 
00:30:21.533 [2024-07-25 12:16:58.676497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.676516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.676790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.676809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.677083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.677102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.677237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.677256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.677385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.677403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 
00:30:21.533 [2024-07-25 12:16:58.677628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.677647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.677937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.677956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.678169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.678187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.678452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.678470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.533 qpair failed and we were unable to recover it. 00:30:21.533 [2024-07-25 12:16:58.678678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.533 [2024-07-25 12:16:58.678698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.534 qpair failed and we were unable to recover it. 
00:30:21.534 [2024-07-25 12:16:58.678960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.534 [2024-07-25 12:16:58.678979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.534 qpair failed and we were unable to recover it. 00:30:21.534 [2024-07-25 12:16:58.679240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.534 [2024-07-25 12:16:58.679259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.534 qpair failed and we were unable to recover it. 00:30:21.534 [2024-07-25 12:16:58.679412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.534 [2024-07-25 12:16:58.679430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.534 qpair failed and we were unable to recover it. 00:30:21.534 [2024-07-25 12:16:58.679657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.534 [2024-07-25 12:16:58.679676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.534 qpair failed and we were unable to recover it. 00:30:21.534 [2024-07-25 12:16:58.679867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.534 [2024-07-25 12:16:58.679886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.534 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.706849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.706867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.707021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.707039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.707304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.707322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.707471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.707489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.707694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.707713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.707933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.707952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.708159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.708177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.708380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.708398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.708613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.708633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.708953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.708971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.709237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.709255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.709447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.709466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.709676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.709695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.709890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.709908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.710174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.710193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.710401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.710420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.710544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.710562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.710856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.710876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.711167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.711186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.711453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.711471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.711759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.711778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.712025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.712043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.712278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.712297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.712489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.712508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.712804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.712823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.713037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.713055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.713260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.713278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.713488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.713507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.713719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.713738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.714047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.714065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 
00:30:21.537 [2024-07-25 12:16:58.714339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.714357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.537 qpair failed and we were unable to recover it. 00:30:21.537 [2024-07-25 12:16:58.714511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.537 [2024-07-25 12:16:58.714528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.714868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.714887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.715080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.715098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.715259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.715280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 
00:30:21.538 [2024-07-25 12:16:58.715547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.715565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.715795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.715813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.716009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.716027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.716187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.716206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.716475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.716494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 
00:30:21.538 [2024-07-25 12:16:58.716711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.716729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.716922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.716941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.717150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.717168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.717376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.717395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.717687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.717707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 
00:30:21.538 [2024-07-25 12:16:58.717975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.717993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.718315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.718333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.718639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.718658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.718916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.718935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.719224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.719242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 
00:30:21.538 [2024-07-25 12:16:58.719501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.719519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.719747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.719766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.720052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.720070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.720290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.720308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.720597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.720621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 
00:30:21.538 [2024-07-25 12:16:58.720905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.720924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.721146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.721165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.721439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.721457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.721651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.721670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.721863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.721881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 
00:30:21.538 [2024-07-25 12:16:58.722144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.722162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.722414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.722433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.538 [2024-07-25 12:16:58.722713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.538 [2024-07-25 12:16:58.722732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.538 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.723057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.723075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.723266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.723284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 
00:30:21.539 [2024-07-25 12:16:58.723492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.723511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.723738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.723756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.724044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.724062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.724341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.724359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.724608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.724627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 
00:30:21.539 [2024-07-25 12:16:58.724896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.724914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.725228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.725246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.725489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.725508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.725801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.725820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 00:30:21.539 [2024-07-25 12:16:58.726140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.539 [2024-07-25 12:16:58.726163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.539 qpair failed and we were unable to recover it. 
00:30:21.539 [2024-07-25 12:16:58.726481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.539 [2024-07-25 12:16:58.726499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.539 qpair failed and we were unable to recover it.
[log condensed: the same three-line sequence — connect() failed with errno = 111, sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats roughly 115 more times between 12:16:58.726 and 12:16:58.759 with only the timestamps changing]
00:30:21.542 [2024-07-25 12:16:58.759346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.759364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.759626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.759645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.759836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.759854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.760067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.760085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.760378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.760397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 
00:30:21.542 [2024-07-25 12:16:58.760667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.760686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.760981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.760998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.761286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.761304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.761620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.761640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.761959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.761977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 
00:30:21.542 [2024-07-25 12:16:58.762217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.762236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.762524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.762542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.762802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.762821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.763023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.763041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.763251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.763269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 
00:30:21.542 [2024-07-25 12:16:58.763558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.763576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.763795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.763814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.764035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.764053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.764272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.764291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.764580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.764599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 
00:30:21.542 [2024-07-25 12:16:58.764929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.764948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.765213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.765231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.765505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.765523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.542 qpair failed and we were unable to recover it. 00:30:21.542 [2024-07-25 12:16:58.765800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.542 [2024-07-25 12:16:58.765821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.766087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.766106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.766299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.766317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.766614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.766633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.766895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.766914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.767182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.767200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.767464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.767482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.767770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.767797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.768009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.768031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.768242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.768261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.768564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.768582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.768794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.768812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.769047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.769065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.769277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.769295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.769565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.769583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.769783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.769802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.770092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.770111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.770375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.770393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.770596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.770622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.770774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.771078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.771097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.771374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.771392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.771608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.771628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.771824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.771842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.772054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.772072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.772378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.772396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.772553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.772571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.772873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.772892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.773157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.773176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.773447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.773465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.773759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.773778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 00:30:21.543 [2024-07-25 12:16:58.774118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.543 [2024-07-25 12:16:58.774137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.543 qpair failed and we were unable to recover it. 
00:30:21.543 [2024-07-25 12:16:58.774400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.774418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.774630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.774649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.774867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.774885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.775167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.775186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.775450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.775468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 
00:30:21.544 [2024-07-25 12:16:58.775841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.775860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.776073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.776091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.776387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.776406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.776698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.776717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.776992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.777010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 
00:30:21.544 [2024-07-25 12:16:58.777321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.777339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.777582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.777600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.777833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.777851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.778115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.778133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.778429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.778448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 
00:30:21.544 [2024-07-25 12:16:58.778738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.778757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.778973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.778995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.779259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.779278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.779543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.779561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.779851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.779871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 
00:30:21.544 [2024-07-25 12:16:58.782859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.782879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.783203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.783221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.783535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.783553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.783874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.783893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 00:30:21.544 [2024-07-25 12:16:58.784114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.544 [2024-07-25 12:16:58.784132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.544 qpair failed and we were unable to recover it. 
00:30:21.824 [2024-07-25 12:16:58.815434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.815453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.815709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.815731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.815947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.815966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.816262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.816280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.816496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.816515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 
00:30:21.824 [2024-07-25 12:16:58.816734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.816754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.817026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.817044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.817331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.817350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.817588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.817612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.817877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.817896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 
00:30:21.824 [2024-07-25 12:16:58.818104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.818122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.824 [2024-07-25 12:16:58.818335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.824 [2024-07-25 12:16:58.818352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.824 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.818647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.818666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.818878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.818897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.819189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.819207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.819502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.819520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.819747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.819766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.819981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.819999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.820290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.820309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.820571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.820590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.820863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.820882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.821088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.821106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.821395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.821415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.821707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.821727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.821951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.821970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.822234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.822253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.822541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.822559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.822802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.822820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.823100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.823118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.823409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.823427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.823765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.823784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.824047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.824065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.824282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.824300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.824491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.824510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.824777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.824795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.824933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.824951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.825162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.825181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.825379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.825397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.825630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.825649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.825884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.825902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.826206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.826225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.826457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.826478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.826755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.826774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.827003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.827021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.827227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.827245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 
00:30:21.825 [2024-07-25 12:16:58.827468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.827487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.827791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.827810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.828093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.828112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.828399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.825 [2024-07-25 12:16:58.828418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.825 qpair failed and we were unable to recover it. 00:30:21.825 [2024-07-25 12:16:58.828755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.828774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 
00:30:21.826 [2024-07-25 12:16:58.828990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.829008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.829303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.829322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.829518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.829537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.829837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.829857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.830132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.830151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 
00:30:21.826 [2024-07-25 12:16:58.830444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.830463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.830748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.830767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.831061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.831080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.831369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.831388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.831649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.831669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 
00:30:21.826 [2024-07-25 12:16:58.831946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.831965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.832250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.832268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.832481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.832499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.832767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.832786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.833016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.833035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 
00:30:21.826 [2024-07-25 12:16:58.833307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.833326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.833635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.833655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.833976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.833995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.834201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.834220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.834425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.834444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 
00:30:21.826 [2024-07-25 12:16:58.834733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.834752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.835026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.835045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.835355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.835374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.835645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.835664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 00:30:21.826 [2024-07-25 12:16:58.835871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.835889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it. 
00:30:21.826 [2024-07-25 12:16:58.836182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.836200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.836519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.836537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.836850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.836869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.837135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.837154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.837313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.837332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.837628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.837647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.837791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.837815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.838083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.838102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.838309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.838328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.838622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.838641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.838913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.838931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.826 [2024-07-25 12:16:58.839177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.826 [2024-07-25 12:16:58.839196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.826 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.839512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.839531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.839744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.839763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.839914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.839932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.840229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.840247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.840474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.840493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.840816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.840836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.841072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.841090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.841354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.841373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.841571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.841590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.841860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.841878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.842168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.842187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.842481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.842499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.842791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.842810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.843077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.843095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.843391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.843410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.843677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.843696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.844011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.844029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.844376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.844395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.844691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.844710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.844942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.844961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.845088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.845106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.845407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.845425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.845639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.845658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.845813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.845831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.846123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.846141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.846402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.846420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.846627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.846645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.846844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.846863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.847150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.847168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.847389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.827 [2024-07-25 12:16:58.847407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.827 qpair failed and we were unable to recover it.
00:30:21.827 [2024-07-25 12:16:58.847685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.847704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.847848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.847866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.848075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.848093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.848355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.848373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.848662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.848684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.849025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.849043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.849377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.849395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.849690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.849709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.849976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.849995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.850284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.850303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.850597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.850622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.850916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.850935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.851146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.851164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.851384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.851403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.851628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.851648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.851940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.851958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.852249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.852267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.852572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.852590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.852872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.852892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.853184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.853203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.853494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.853512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.853732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.853751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.854075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.854093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.854385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.854404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.854613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.854632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.854824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.854843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.855134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.855152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.855392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.855410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.855650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.855669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.855935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.855954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.856224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.856242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.856440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.856458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.856774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.856793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.857119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.857137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.857439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.857457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.857787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.857806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.858123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.858141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.828 [2024-07-25 12:16:58.858357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.828 [2024-07-25 12:16:58.858375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.828 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.858646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.858665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.858927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.858945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.859211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.859230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.859500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.859518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.859806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.859825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.860040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.860058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.860313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.860335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.860629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.860648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.860886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.860904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.861116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.861134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.861401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.861420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.861686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.861705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.861940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.861958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.862191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.862209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.862432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.862451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.862659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.862678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.862834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.862852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.863140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.863158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.863423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.863441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.863767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.863786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.864105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.864124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.864263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.864281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.864482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.864501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.864785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.864804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.865094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.865112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.865391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.865409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.865635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.865653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.865938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.865956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.866246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.866264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.866541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.866560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.866876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.866895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.867109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.867128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.867447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.867465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.867676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.867696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.867933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.867952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it.
00:30:21.829 [2024-07-25 12:16:58.868270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.868289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it. 00:30:21.829 [2024-07-25 12:16:58.868499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.868517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it. 00:30:21.829 [2024-07-25 12:16:58.868716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.829 [2024-07-25 12:16:58.868734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.829 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.868951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.868969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.869232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.869251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.869511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.869529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.869793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.869812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.870108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.870126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.870399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.870417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.870684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.870703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.870992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.871010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.871313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.871334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.871651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.871670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.871906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.871924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.872200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.872218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.872374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.872393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.872728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.872747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.872967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.872986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.873211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.873230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.873496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.873515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.873807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.873827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.874101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.874120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.874273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.874291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.874526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.874544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.874863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.874881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.875148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.875167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.875464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.875482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.875717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.875737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.876031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.876049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.876285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.876302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.876517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.876536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.876744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.876763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.877102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.877121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.877414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.877433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.877672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.877691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.877987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.878005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.878270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.878289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.878505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.878523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.878809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.878828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.830 [2024-07-25 12:16:58.879120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.879138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 
00:30:21.830 [2024-07-25 12:16:58.879416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.830 [2024-07-25 12:16:58.879434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.830 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.879768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.879787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.880095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.880113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.880321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.880339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.880624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.880643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 
00:30:21.831 [2024-07-25 12:16:58.880847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.880866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.881074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.881092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.881395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.881413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.881658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.881677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.881969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.881987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 
00:30:21.831 [2024-07-25 12:16:58.882282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.882300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.882513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.882531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.882668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.882688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.882975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.882993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.883226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.883244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 
00:30:21.831 [2024-07-25 12:16:58.883441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.883459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.883700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.883719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.883930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.883948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.884103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.884122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.884413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.884431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 
00:30:21.831 [2024-07-25 12:16:58.884726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.884745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.885036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.885053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.885203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.885222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.885509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.885527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.885751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.885770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 
00:30:21.831 [2024-07-25 12:16:58.885971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.885989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.886275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.886294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.886580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.886599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.886907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.886925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.887244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.887263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 
00:30:21.831 [2024-07-25 12:16:58.887524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.887542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.887903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.887924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.831 qpair failed and we were unable to recover it. 00:30:21.831 [2024-07-25 12:16:58.888140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.831 [2024-07-25 12:16:58.888158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.832 qpair failed and we were unable to recover it. 00:30:21.832 [2024-07-25 12:16:58.888311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.832 [2024-07-25 12:16:58.888330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.832 qpair failed and we were unable to recover it. 00:30:21.832 [2024-07-25 12:16:58.888566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.832 [2024-07-25 12:16:58.888584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.832 qpair failed and we were unable to recover it. 
00:30:21.832 [2024-07-25 12:16:58.888853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.832 [2024-07-25 12:16:58.888872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.832 qpair failed and we were unable to recover it.
00:30:21.832 [... the same three-line sequence (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent connection attempt from 12:16:58.889166 through 12:16:58.919862 ...]
00:30:21.835 [2024-07-25 12:16:58.920104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.920122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.920386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.920404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.920616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.920636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.920929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.920947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.921293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.921311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 
00:30:21.835 [2024-07-25 12:16:58.921612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.921633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.921954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.921973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.922265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.922284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.922490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.922514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.922815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.922834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 
00:30:21.835 [2024-07-25 12:16:58.923148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.923167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.923448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.923466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.923754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.923773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.924115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.924133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.924348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.924366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 
00:30:21.835 [2024-07-25 12:16:58.924657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.924677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.924913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.924931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.925226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.925244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.925504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.925522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.925739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.925757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 
00:30:21.835 [2024-07-25 12:16:58.926047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.926065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.926360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.926379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.926676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.926696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.926924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.926943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.927239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.927257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 
00:30:21.835 [2024-07-25 12:16:58.927466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.927484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.927686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.927705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.927966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.927984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.928179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.928197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.928497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.928515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 
00:30:21.835 [2024-07-25 12:16:58.928723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.928742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.929032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.929050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.929312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.929330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.835 [2024-07-25 12:16:58.929622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.835 [2024-07-25 12:16:58.929641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.835 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.929976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.929995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.930237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.930255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.930545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.930563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.930859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.930878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.931221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.931239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.931537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.931556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.931843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.931862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.932205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.932223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.932463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.932482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.932775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.932793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.933058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.933076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.933285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.933303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.933518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.933536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.933825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.933844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.934162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.934183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.934476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.934494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.934710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.934729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.934994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.935012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.935277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.935295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.935561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.935579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.935876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.935895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.936221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.936239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.936555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.936574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.936906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.936925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.937218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.937237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.937503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.937522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.937750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.937769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.938051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.938071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.938412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.938430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.938624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.938643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.836 [2024-07-25 12:16:58.938854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.938872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 
00:30:21.836 [2024-07-25 12:16:58.939091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.836 [2024-07-25 12:16:58.939109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.836 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.939317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.939336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.939567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.939585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.939827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.939846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.940160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.940179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 
00:30:21.837 [2024-07-25 12:16:58.940387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.940405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.940699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.940718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.940985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.941003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.941288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.941306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.941529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.941547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 
00:30:21.837 [2024-07-25 12:16:58.941839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.941858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.942148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.942166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.942503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.942522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.942659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.942678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 00:30:21.837 [2024-07-25 12:16:58.942968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.837 [2024-07-25 12:16:58.942986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.837 qpair failed and we were unable to recover it. 
00:30:21.840 [2024-07-25 12:16:58.974464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.974482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.974763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.974781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.975090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.975108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.975319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.975337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.975613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.975632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 
00:30:21.840 [2024-07-25 12:16:58.975805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.975824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.976091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.976108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.976321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.976339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.976566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.976585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.976907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.976926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 
00:30:21.840 [2024-07-25 12:16:58.977134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.977152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.977443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.977461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.977653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.977672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.977963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.977982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.978272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.978291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 
00:30:21.840 [2024-07-25 12:16:58.978570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.978588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.978844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.978863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.979151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.979170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.979435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.979459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.979733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.979752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 
00:30:21.840 [2024-07-25 12:16:58.979899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.979917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.980129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.980147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.980380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.980398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.840 [2024-07-25 12:16:58.980538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.840 [2024-07-25 12:16:58.980557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.840 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.980821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.980840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.981044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.981062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.981324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.981342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.981536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.981554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.981877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.981896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.982109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.982127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.982404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.982423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.982620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.982639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.982943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.982962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.983088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.983106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.983399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.983417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.983737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.983756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.984075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.984093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.984399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.984417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.984746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.984765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.985033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.985051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.985190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.985208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.985506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.985525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.985803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.985822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.986115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.986133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.986476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.986494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.986795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.986814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.987104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.987122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.987420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.987438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.987678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.987697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.987993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.988013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.988213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.988231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.988504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.988522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.988732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.988751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.989040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.989058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.989397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.989415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 
00:30:21.841 [2024-07-25 12:16:58.989647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.989666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.989945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.989964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.990190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.990208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.841 [2024-07-25 12:16:58.990497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.841 [2024-07-25 12:16:58.990519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.841 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.990789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.990808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 
00:30:21.842 [2024-07-25 12:16:58.991033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.991051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.991331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.991349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.991587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.991619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.991908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.991927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.992218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.992236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 
00:30:21.842 [2024-07-25 12:16:58.992380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.992398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.992661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.992680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.992924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.992942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.993182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.993200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.993497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.993515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 
00:30:21.842 [2024-07-25 12:16:58.993722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.993741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.993938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.993956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.994107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.994125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.994415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.994433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 00:30:21.842 [2024-07-25 12:16:58.994728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.994747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it. 
00:30:21.842 [2024-07-25 12:16:58.995047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.842 [2024-07-25 12:16:58.995065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.842 qpair failed and we were unable to recover it.
[... the same pair of errors — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 12:16:58.995 through 12:16:59.026 with no other output ...]
00:30:21.845 [2024-07-25 12:16:59.026794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.026812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.027126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.027144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.027382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.027401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.027629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.027648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.027941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.027959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 
00:30:21.845 [2024-07-25 12:16:59.028234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.028253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.028494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.028513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.028791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.028810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.029099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.029118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.029322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.029340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 
00:30:21.845 [2024-07-25 12:16:59.029627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.029646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.029921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.029940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.030219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.030237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.030527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.845 [2024-07-25 12:16:59.030547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.845 qpair failed and we were unable to recover it. 00:30:21.845 [2024-07-25 12:16:59.030881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.030900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.031059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.031081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.031351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.031369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.031662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.031681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.031948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.031966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.032250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.032268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.032554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.032573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.032916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.032934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.033165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.033184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.033402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.033420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.033722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.033741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.033974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.033993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.034283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.034301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.034609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.034629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.034945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.034964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.035374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.035393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.035539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.035557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.035872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.035891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.036139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.036158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.036365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.036383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.036678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.036697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.036972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.036990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.037275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.037294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.037491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.037509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.037657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.037676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.037873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.037892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.038049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.038069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.038295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.038314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.038513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.038531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.038725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.038745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.038949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.038967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.039177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.039196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.039495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.039514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.039718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.039737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.039930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.039948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.040169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.040188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 
00:30:21.846 [2024-07-25 12:16:59.040410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.040429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.846 qpair failed and we were unable to recover it. 00:30:21.846 [2024-07-25 12:16:59.040829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.846 [2024-07-25 12:16:59.040850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.040988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.041008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.041165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.041392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.041411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 
00:30:21.847 [2024-07-25 12:16:59.041612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.041635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.041849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.041868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.042127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.042145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.042354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.042373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.042527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.042546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 
00:30:21.847 [2024-07-25 12:16:59.042835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.042855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.043066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.043085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.043351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.043369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.043609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.043628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.043851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.043869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 
00:30:21.847 [2024-07-25 12:16:59.044074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.044093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.044308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.044327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.044616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.044635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.044889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.044908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.045200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.045219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 
00:30:21.847 [2024-07-25 12:16:59.045364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.045382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.045575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.045593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.045796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.045814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.045970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.045988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 00:30:21.847 [2024-07-25 12:16:59.046206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.847 [2024-07-25 12:16:59.046226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.847 qpair failed and we were unable to recover it. 
00:30:21.847 [2024-07-25 12:16:59.046353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.046371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.046564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.046582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.046816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.046836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.047100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.047119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.047266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.047284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.047501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.047520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.047666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.047686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.047924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.047943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.048207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.048225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.048540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.048558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.048778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.048797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.049111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.049130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.049327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.049345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.049636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.049656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.847 qpair failed and we were unable to recover it.
00:30:21.847 [2024-07-25 12:16:59.049868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.847 [2024-07-25 12:16:59.049887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.050041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.050060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.050277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.050296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.050431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.050450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.050691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.050711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.050992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.051011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.051242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.051264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.051391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.051410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.051565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.051583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.051643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bfe80 (9): Bad file descriptor
00:30:21.848 [2024-07-25 12:16:59.051916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.051963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.052221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.052255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.052549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.052581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.052759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.052780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.052991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.053010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.053314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.053332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.053599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.053626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.053892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.053911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.054116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.054134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.054348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.054366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.054687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.054707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.054931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.054950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.055097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.055115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.055252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.055271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.055411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.055430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.055552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.055571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.055730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.055749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.055883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.055902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.056187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.056206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.056414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.056432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.056642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.056662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.056865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.056884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.057089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.057108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.057341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.057363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.057567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.057585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.057854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.057874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.058076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.058095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.058305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.058323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.848 [2024-07-25 12:16:59.058531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.848 [2024-07-25 12:16:59.058550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.848 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.058753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.058772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.059036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.059054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.059316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.059334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.059537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.059555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.059826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.059845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.060120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.060139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.060289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.060308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.060572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.060590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.060741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.060760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.060897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.060916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.061178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.061196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.061483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.061502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.061699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.061718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.061915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.061933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.062072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.062090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.062357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.062375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.062526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.062544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.062772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.062791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.062937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.062956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.063189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.063207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.063336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.063354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.063573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.063592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.063864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.063883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.064096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.064115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.064259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.064277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.064427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.064446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.064733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.064752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.064946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.064964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.065158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.065177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.065416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.065435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.065667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.065686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.065822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.065841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.066059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.066077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.066342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.066361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.066610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.066629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.066848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.066867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.067159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.067178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.849 [2024-07-25 12:16:59.067443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.849 [2024-07-25 12:16:59.067462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.849 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.067735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.067754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.067948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.067966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.068206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.068225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.068366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.068385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.068676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.068695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.068897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.068915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.069174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.069192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.069400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.069419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.069645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.069665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.069869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.069887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.070028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.070046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.070183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.070202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.070431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.070449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.070657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.070676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.070889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.070907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.071042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.071061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.071256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.071274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.071506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.071524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.071758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.071777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.071980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.071998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.072262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.072280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.072410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.072429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.072639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.072658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.072801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.072823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.072946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.850 [2024-07-25 12:16:59.072964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.850 qpair failed and we were unable to recover it.
00:30:21.850 [2024-07-25 12:16:59.073229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.850 [2024-07-25 12:16:59.073247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.850 qpair failed and we were unable to recover it. 00:30:21.850 [2024-07-25 12:16:59.073403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.850 [2024-07-25 12:16:59.073422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.850 qpair failed and we were unable to recover it. 00:30:21.850 [2024-07-25 12:16:59.073691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.073710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.073931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.073949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.074104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.074122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.074271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.074289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.074440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.074459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.074669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.074688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.074895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.074913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.075053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.075072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.075271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.075290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.075574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.075593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.075750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.075769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.075912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.075930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.076128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.076146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.076271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.076290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.076439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.076458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.076597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.076623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.077422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.077445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.077753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.077772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.077981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.077999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.078192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.078211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.078421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.078440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.078637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.078657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.078874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.078892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.079004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.079023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.079235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.079254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.079464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.079482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.079697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.079717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.079869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.079887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.080095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.080114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.080257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.080276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.080500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.080519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.080710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.080729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.080936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.080954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 
00:30:21.851 [2024-07-25 12:16:59.081154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.081173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.081312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.081331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.081551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.081570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.081798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.081824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.851 qpair failed and we were unable to recover it. 00:30:21.851 [2024-07-25 12:16:59.082148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.851 [2024-07-25 12:16:59.082167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.082318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.082336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.082551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.082570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.082768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.082787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.082931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.082949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.083142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.083161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.083357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.083376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.083515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.083534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.083800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.083821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.084020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.084039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.084252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.084270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.084485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.084504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.084685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.084704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.084841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.084860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.084988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.085006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.085211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.085230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.085408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.085427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.085564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.085583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.085728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.085747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.085878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.085897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.086085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.086104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.086300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.086318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.086467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.086486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.086700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.086719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.086828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.086846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.087054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.087072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.087231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.087250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.087594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.087622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.087836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.087855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.088061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.088080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.088318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.088337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 
00:30:21.852 [2024-07-25 12:16:59.088628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.088648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.088912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.088930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.852 [2024-07-25 12:16:59.089123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.852 [2024-07-25 12:16:59.089141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.852 qpair failed and we were unable to recover it. 00:30:21.853 [2024-07-25 12:16:59.089345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.089363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 00:30:21.853 [2024-07-25 12:16:59.089557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.089576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 
00:30:21.853 [2024-07-25 12:16:59.089777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.089797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 00:30:21.853 [2024-07-25 12:16:59.089992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.090011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 00:30:21.853 [2024-07-25 12:16:59.090155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.090174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 00:30:21.853 [2024-07-25 12:16:59.090384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.090406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 00:30:21.853 [2024-07-25 12:16:59.090638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.853 [2024-07-25 12:16:59.090658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.853 qpair failed and we were unable to recover it. 
00:30:22.135 [The same three-line sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats unchanged from 12:16:59.090857 through 12:16:59.114014.]
00:30:22.135 [2024-07-25 12:16:59.114146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.135 [2024-07-25 12:16:59.114164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.135 qpair failed and we were unable to recover it. 00:30:22.135 [2024-07-25 12:16:59.114354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.135 [2024-07-25 12:16:59.114372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.135 qpair failed and we were unable to recover it. 00:30:22.135 [2024-07-25 12:16:59.114497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.135 [2024-07-25 12:16:59.114515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.135 qpair failed and we were unable to recover it. 00:30:22.135 [2024-07-25 12:16:59.114742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.135 [2024-07-25 12:16:59.114761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.135 qpair failed and we were unable to recover it. 00:30:22.135 [2024-07-25 12:16:59.114992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.135 [2024-07-25 12:16:59.115010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.135 qpair failed and we were unable to recover it. 
00:30:22.135 [2024-07-25 12:16:59.115153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.135 [2024-07-25 12:16:59.115171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.135 qpair failed and we were unable to recover it. 00:30:22.135 [2024-07-25 12:16:59.115375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.115394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.115610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.115629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.115767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.115788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.115995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.116013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.116151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.116169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.116367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.116385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.116656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.116675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.116897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.116915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.117069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.117088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.117361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.117380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.117507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.117525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.117720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.117740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.117951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.117970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.118169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.118188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.118430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.118448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.118782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.118801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.119038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.119057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.119273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.119292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.119427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.119445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.119586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.119609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.119804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.119823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.119968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.119986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.120214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.120233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.120427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.120445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.120568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.120586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.120728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.120747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.120960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.120979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.121135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.121153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.121290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.121308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.121624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.121644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.121903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.121921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.122052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.122071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.122203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.122221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.122456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.122474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 
00:30:22.136 [2024-07-25 12:16:59.122600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.122626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.122774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.122793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.122989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.123007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.136 [2024-07-25 12:16:59.123163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.136 [2024-07-25 12:16:59.123181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.136 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.123349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.123367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.123656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.123676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.123801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.123819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.124085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.124104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.124256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.124309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.124548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.124566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.124864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.124882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.125087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.125105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.125345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.125363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.125565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.125583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.125734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.125753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.126101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.126119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.126389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.126408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.126680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.126699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.126857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.126875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.127038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.127056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.127318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.127336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.127569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.127587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.127744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.127763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.127977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.127995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.128258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.128277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.128500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.128518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.128667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.128686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.128899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.128917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.129180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.129199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.129464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.129482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.129611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.129630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.129839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.129857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.130050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.130068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.130328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.130347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 00:30:22.137 [2024-07-25 12:16:59.130451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.130470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
00:30:22.137 [2024-07-25 12:16:59.130629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.137 [2024-07-25 12:16:59.130649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.137 qpair failed and we were unable to recover it. 
[... identical error pair repeats continuously from 12:16:59.130 through 12:16:59.154: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error to addr=10.0.0.2, port=4420 for tqpair=0x7f7bfc000b90 (and briefly tqpair=0x7f7bec000b90 around 12:16:59.138-140), each ending with "qpair failed and we were unable to recover it." ...]
00:30:22.141 [2024-07-25 12:16:59.154482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.154500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.154707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.154728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.154929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.154946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.155102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.155120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.155260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.155278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.155408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.155426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.155566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.155586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.155800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.155819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.156041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.156060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.156322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.156342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.156544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.156563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.156778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.156797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.157044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.157063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.157194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.157213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.157427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.157445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.157590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.157615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.157757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.157775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.157927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.157946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.158180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.158199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.158408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.158430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.158589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.158615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.158757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.158776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.158985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.159004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.159159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.159177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.159382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.159401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.159596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.159622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.159748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.159767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.159975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.159994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.160128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.160146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.160282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.160301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.160512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.160531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 
00:30:22.141 [2024-07-25 12:16:59.160737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.160756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.160898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.160917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.161185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.161204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.141 [2024-07-25 12:16:59.161426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.141 [2024-07-25 12:16:59.161445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.141 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.161590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.161615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.161770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.161789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.162086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.162105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.162229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.162249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.162489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.162508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.162711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.162730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.162839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.162858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.163059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.163078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.163306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.163325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.163469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.163487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.163697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.163717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.163924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.163943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.164093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.164112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.164257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.164276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.164483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.164502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.164752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.164772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.164915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.164933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.165131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.165150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.165440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.165458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.165597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.165624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.165828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.165848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.166138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.166157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.166302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.166321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.166514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.166533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.166692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.166718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.166924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.166942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.167068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.167086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.167235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.167253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.167384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.167403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.167552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.167571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.167737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.167757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.167953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.167973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.168116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.168135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.168325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.168344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.168486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.168505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.168630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.168650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 
00:30:22.142 [2024-07-25 12:16:59.168806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.168825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.142 qpair failed and we were unable to recover it. 00:30:22.142 [2024-07-25 12:16:59.168954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.142 [2024-07-25 12:16:59.168973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.143 qpair failed and we were unable to recover it. 00:30:22.143 [2024-07-25 12:16:59.169118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.143 [2024-07-25 12:16:59.169137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.143 qpair failed and we were unable to recover it. 00:30:22.143 [2024-07-25 12:16:59.169417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.143 [2024-07-25 12:16:59.169436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.143 qpair failed and we were unable to recover it. 00:30:22.143 [2024-07-25 12:16:59.169638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.143 [2024-07-25 12:16:59.169658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.143 qpair failed and we were unable to recover it. 
00:30:22.143 [2024-07-25 12:16:59.169879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.143 [2024-07-25 12:16:59.169898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.143 qpair failed and we were unable to recover it.
00:30:22.146 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 12:16:59.169879 through 12:16:59.191589 ...]
00:30:22.146 [2024-07-25 12:16:59.191776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.191795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.191999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.192018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.192217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.192237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.192435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.192454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.192682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.192702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 
00:30:22.146 [2024-07-25 12:16:59.192832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.192852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.193018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.193037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.193230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.193249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.193398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.193417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.193567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.193587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 
00:30:22.146 [2024-07-25 12:16:59.193860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.193878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.194016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.194035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.194247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.194266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.194472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.194491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.194630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.194650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 
00:30:22.146 [2024-07-25 12:16:59.194775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.194797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.194959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.194979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.195190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.195209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.195358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.195377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.195506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.195525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 
00:30:22.146 [2024-07-25 12:16:59.195752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.195772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.195902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.195920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.196055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.196073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.196203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.196222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.196356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.196375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 
00:30:22.146 [2024-07-25 12:16:59.196507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.196526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.196665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.196685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.146 [2024-07-25 12:16:59.196855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.146 [2024-07-25 12:16:59.196873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.146 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.197138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.197157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.197425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.197445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.197592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.197619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.197745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.197763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.197963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.197982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.198174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.198193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.198385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.198404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.198671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.198691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.198975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.198993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.199108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.199126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.199323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.199342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.199624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.199643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.199801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.199820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.200082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.200101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.200307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.200326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.200459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.200478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.200679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.200699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.200844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.200863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.201069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.201088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.201242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.201261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.201390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.201409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.201541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.201560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.201770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.201790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.201946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.201965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.202175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.202194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.202414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.202433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.202573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.202591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.202813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.202836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.202965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.202984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.203134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.203152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.203367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.203386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.203577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.203597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.203737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.203756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.203886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.203905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.204037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.204056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.204283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.204301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 00:30:22.147 [2024-07-25 12:16:59.204499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.204519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.147 qpair failed and we were unable to recover it. 
00:30:22.147 [2024-07-25 12:16:59.204671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.147 [2024-07-25 12:16:59.204692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.204899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.204918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.205134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.205153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.205383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.205402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.205570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.205589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 
00:30:22.148 [2024-07-25 12:16:59.205796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.205816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.205974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.205992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.206192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.206211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.206432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.206452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 00:30:22.148 [2024-07-25 12:16:59.206609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.206628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 
00:30:22.148 [2024-07-25 12:16:59.206840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.148 [2024-07-25 12:16:59.206859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.148 qpair failed and we were unable to recover it. 
00:30:22.151 [2024-07-25 12:16:59.229255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.229273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.229399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.229418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.229564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.229583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.229724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.229743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.229874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.229893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 
00:30:22.151 [2024-07-25 12:16:59.230100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.230119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.230245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.230264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.230402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.230420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.230551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.230569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.230733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.230752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 
00:30:22.151 [2024-07-25 12:16:59.231015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.231034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.231289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.231308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.231508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.231527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.231677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.231697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.231832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.231851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 
00:30:22.151 [2024-07-25 12:16:59.232079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.232098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.232239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.232258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.232508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.232527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.232733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.232752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.151 [2024-07-25 12:16:59.232892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.232911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 
00:30:22.151 [2024-07-25 12:16:59.233109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.151 [2024-07-25 12:16:59.233128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.151 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.233264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.233283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.233410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.233429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.233536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.233556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.233704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.233723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.233990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.234012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.234168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.234186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.234432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.234451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.234582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.234601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.234869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.234888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.235033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.235052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.235232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.235251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.235371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.235390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.235583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.235610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.235777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.235795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.235992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.236011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.236242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.236261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.236462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.236480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.236749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.236769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.236913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.236932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.237085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.237104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.237316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.237336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.237546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.237564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.237714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.237733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.238030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.238048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.238180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.238200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.238393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.238413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.238630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.238650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.238768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.238787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.238989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.239008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.239134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.239153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.239317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.239336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.239552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.239572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.239731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.239750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.239997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.240016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 
00:30:22.152 [2024-07-25 12:16:59.240162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.240181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.240374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.240392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.240539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.240558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.152 qpair failed and we were unable to recover it. 00:30:22.152 [2024-07-25 12:16:59.240864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.152 [2024-07-25 12:16:59.240884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.241031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.241050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 
00:30:22.153 [2024-07-25 12:16:59.241196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.241216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.241420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.241439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.241595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.241621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.241767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.241785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.241928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.241946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 
00:30:22.153 [2024-07-25 12:16:59.242076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.242098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.242238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.242257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.242469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.242487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.242630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.242649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.242940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.242959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 
00:30:22.153 [2024-07-25 12:16:59.243093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.243112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.243246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.243264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.243464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.243482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.243747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.243766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.243893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.243911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 
00:30:22.153 [2024-07-25 12:16:59.244052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.244070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.244231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.244249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.244375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.244394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.244575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.244594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 00:30:22.153 [2024-07-25 12:16:59.244809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.153 [2024-07-25 12:16:59.244827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.153 qpair failed and we were unable to recover it. 
00:30:22.156 [2024-07-25 12:16:59.275362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.275386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.275679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.275698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.275908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.275926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.156 [2024-07-25 12:16:59.276135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.276154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.276293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.276312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 
00:30:22.156 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:22.156 [2024-07-25 12:16:59.276573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.276593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.156 [2024-07-25 12:16:59.276830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.276849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:22.156 [2024-07-25 12:16:59.277058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.277078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.277242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.277262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 
00:30:22.156 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.156 [2024-07-25 12:16:59.277573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.277592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.277822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.277841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.278064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.278083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.278249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.278267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.278532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.278551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 
00:30:22.156 [2024-07-25 12:16:59.278757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.278776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.279007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.279025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.156 [2024-07-25 12:16:59.279239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.156 [2024-07-25 12:16:59.279257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.156 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.279548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.279567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.279869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.279888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.280102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.280121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.280326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.280345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.280633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.280653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.280963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.280981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.281196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.281216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.281375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.281394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.281543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.281562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.281832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.281851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.281997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.282016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.282164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.282182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.282389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.282408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.282552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.282571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.282736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.282755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.282960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.282978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.283119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.283138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.283467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.283485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.283804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.283824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.284026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.284044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.284391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.284410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.284641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.284665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.284814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.284833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.284975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.284994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.285234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.285253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.285491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.285510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.285716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.285735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.286050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.286069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.286321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.286340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.286535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.286554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.286772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.286791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 00:30:22.157 [2024-07-25 12:16:59.286995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.287014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.157 qpair failed and we were unable to recover it. 
00:30:22.157 [2024-07-25 12:16:59.287258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.157 [2024-07-25 12:16:59.287277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.287567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.287586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.287775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.287793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.287956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.287976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.288191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.288211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 
00:30:22.158 [2024-07-25 12:16:59.288425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.288447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.288812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.288831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.288988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.289007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.289223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.289242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.289547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.289566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 
00:30:22.158 [2024-07-25 12:16:59.289809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.289828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.290049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.290068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.290339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.290358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.290608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.290628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.290839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.290858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 
00:30:22.158 [2024-07-25 12:16:59.291075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.291094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.291433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.291452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.291777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.291796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.292036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.292055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.292215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.292235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 
00:30:22.158 [2024-07-25 12:16:59.292467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.292486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.292755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.292774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.292936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.292955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.293110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.293128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.293431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.293449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 
00:30:22.158 [2024-07-25 12:16:59.293650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.293669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.293879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.293898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.294056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.294075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.294353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.294372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 00:30:22.158 [2024-07-25 12:16:59.294617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.158 [2024-07-25 12:16:59.294640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.158 qpair failed and we were unable to recover it. 
00:30:22.158 [2024-07-25 12:16:59.294786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.158 [2024-07-25 12:16:59.294804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.158 qpair failed and we were unable to recover it.
00:30:22.160 [... the same connect()/qpair-failed message group repeats for each reconnect attempt from 12:16:59.294964 through 12:16:59.312361; duplicates omitted ...]
00:30:22.160 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:22.160 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:22.161 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:22.161 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.161 [... interleaved connect()/qpair-failed message group repeats from 12:16:59.312568 through 12:16:59.314582; duplicates omitted ...]
00:30:22.161 [2024-07-25 12:16:59.314813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.314832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.315075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.315094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.315410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.315430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.315692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.315712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.315974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.315993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 
00:30:22.161 [2024-07-25 12:16:59.316159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.316177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.316401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.316420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.316580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.316598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.316840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.316859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 00:30:22.161 [2024-07-25 12:16:59.317021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.161 [2024-07-25 12:16:59.317040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.161 qpair failed and we were unable to recover it. 
00:30:22.163 [2024-07-25 12:16:59.339261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.163 [2024-07-25 12:16:59.339280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.163 qpair failed and we were unable to recover it.
00:30:22.163 [2024-07-25 12:16:59.339610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.339629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 Malloc0
00:30:22.164 [2024-07-25 12:16:59.339782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.339802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 [2024-07-25 12:16:59.340039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.340058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:22.164 [2024-07-25 12:16:59.340325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.340345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:22.164 [2024-07-25 12:16:59.340617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.340641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 [2024-07-25 12:16:59.340797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.340815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:22.164 [2024-07-25 12:16:59.341023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.341048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.164 [2024-07-25 12:16:59.341261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.164 [2024-07-25 12:16:59.341283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.164 qpair failed and we were unable to recover it.
00:30:22.164 [2024-07-25 12:16:59.341577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.341597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.341844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.341863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.342075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.342094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.342404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.342423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.342699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.342719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 
00:30:22.164 [2024-07-25 12:16:59.342876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.342894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.343135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.343154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.343378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.343397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.343637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.343656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.343800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.343818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 
00:30:22.164 [2024-07-25 12:16:59.343983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.344001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.344218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.344237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.344583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.344606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.344825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.344843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.345008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.345026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 
00:30:22.164 [2024-07-25 12:16:59.345290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.345309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.345573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.345591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.345900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.345919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.346184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.346202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.346410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.346428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 
00:30:22.164 [2024-07-25 12:16:59.346692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.346711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.346865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.346884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.347107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.347126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.347342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.347361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.347480] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.164 [2024-07-25 12:16:59.347686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.347709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 
00:30:22.164 [2024-07-25 12:16:59.347988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.348006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.348307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.348325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.164 [2024-07-25 12:16:59.348560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.164 [2024-07-25 12:16:59.348579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.164 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.348748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.348766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.348927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.348945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 [2024-07-25 12:16:59.349159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.349177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.349401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.349419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.349646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.349666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.349877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.349896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.350171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.350190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 [2024-07-25 12:16:59.350401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.350420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.350559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.350578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.350906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.350926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.351222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.351242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.351587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.351615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 [2024-07-25 12:16:59.351906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.351924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.352174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.352193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.352514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.352532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.352868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.352887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.353081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.353100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 [2024-07-25 12:16:59.353327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.353345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.353623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.353642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.353915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.353933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.354249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.354267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.354503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.354521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 [2024-07-25 12:16:59.354820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.354839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.355209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.355228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.355500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.355520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.355812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.355831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.355985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.356003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.165 [2024-07-25 12:16:59.356270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.356291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.356582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.356609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:22.165 [2024-07-25 12:16:59.356833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.356852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.165 [2024-07-25 12:16:59.357048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.357069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.165 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.165 [2024-07-25 12:16:59.357289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.357310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.357578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.357596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.357848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.357867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.358102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.358125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 00:30:22.165 [2024-07-25 12:16:59.358393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.165 [2024-07-25 12:16:59.358413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.165 qpair failed and we were unable to recover it. 
00:30:22.166 [2024-07-25 12:16:59.358685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.358704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.358911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.358930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.359221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.359240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.359480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.359499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.359796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.359815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 
00:30:22.166 [2024-07-25 12:16:59.360051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.360069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.360280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.360298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.360510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.360529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.360801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.360820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.361112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.361131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 
00:30:22.166 [2024-07-25 12:16:59.361363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.361381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.361589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.361622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.361838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.361857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.362127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.362145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.362372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.362391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 
00:30:22.166 [2024-07-25 12:16:59.362534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.362552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.362746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.362766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.363061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.363079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.363231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.363250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 00:30:22.166 [2024-07-25 12:16:59.363543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.166 [2024-07-25 12:16:59.363561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.166 qpair failed and we were unable to recover it. 
00:30:22.166 [2024-07-25 12:16:59.363784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.363802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.364116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.364134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.364354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.364373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.364655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.364674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.364892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.364911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.365116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.365135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.365481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.365500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.365710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.365729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.366016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.366034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.366196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.366215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.366426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.366445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.366736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.366756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.366908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.366927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.166 qpair failed and we were unable to recover it.
00:30:22.166 [2024-07-25 12:16:59.367191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.166 [2024-07-25 12:16:59.367209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.367508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.367526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.367768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.367788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.367998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.368017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.368335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.368353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.368695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:22.167 [2024-07-25 12:16:59.368768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bf4000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:22.167 [2024-07-25 12:16:59.369092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.369159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.167 [2024-07-25 12:16:59.369485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.369518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1da0 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.369756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.369777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.369950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.369968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.370258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.370277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.370490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.370508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.370835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.370854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.371013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.371032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.371235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.371253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.371462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.371480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.371778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.371797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.372091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.372110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.372311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.372330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.372636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.372656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.372875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.372893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.373137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.373156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.373496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.373514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.373782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.373801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.373943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.373961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.374201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.374220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.374516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.374535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.374738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.374758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.374972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.374990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.375188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.375207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.375438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.375460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.375737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.375756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.375969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.375988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.376194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.376213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.376439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.376457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.167 [2024-07-25 12:16:59.376665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.167 [2024-07-25 12:16:59.376684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.167 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.376937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.376956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.377109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.377127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.377410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.377429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.377695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.377715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.377930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.377948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.378217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.378235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.378461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.378480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.378780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.378800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.378967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.378986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.379204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.379222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.379448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.379467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.379705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.379724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.379881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.379900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.380130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.380148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:22.168 [2024-07-25 12:16:59.380399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.380418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.380638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:22.168 [2024-07-25 12:16:59.380658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.380852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.380871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:22.168 [2024-07-25 12:16:59.381014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.381033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.168 [2024-07-25 12:16:59.381372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.381391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.381636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.381655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.381864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.381882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.382195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.382214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.382429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.382448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.382661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.382681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.382842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.382861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.383107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.383126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.383280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.383298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.383578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.383596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.383878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.168 [2024-07-25 12:16:59.383896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 [2024-07-25 12:16:59.384035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:22.168 [2024-07-25 12:16:59.388264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:22.168 [2024-07-25 12:16:59.388403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.168 [2024-07-25 12:16:59.388435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.168 [2024-07-25 12:16:59.388449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.168 [2024-07-25 12:16:59.388461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.168 [2024-07-25 12:16:59.388496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.168 qpair failed and we were unable to recover it.
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:22.168 12:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 119632
00:30:22.168 [2024-07-25 12:16:59.398168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.168 [2024-07-25 12:16:59.398305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.168 [2024-07-25 12:16:59.398334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.169 [2024-07-25 12:16:59.398347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.169 [2024-07-25 12:16:59.398359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.169 [2024-07-25 12:16:59.398387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.169 qpair failed and we were unable to recover it.
00:30:22.169 [2024-07-25 12:16:59.408226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.169 [2024-07-25 12:16:59.408347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.169 [2024-07-25 12:16:59.408376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.169 [2024-07-25 12:16:59.408389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.169 [2024-07-25 12:16:59.408401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.169 [2024-07-25 12:16:59.408429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.169 qpair failed and we were unable to recover it.
00:30:22.429 [2024-07-25 12:16:59.418406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.429 [2024-07-25 12:16:59.418611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.429 [2024-07-25 12:16:59.418639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.429 [2024-07-25 12:16:59.418653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.429 [2024-07-25 12:16:59.418664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.429 [2024-07-25 12:16:59.418692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.429 qpair failed and we were unable to recover it.
00:30:22.429 [2024-07-25 12:16:59.428246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.429 [2024-07-25 12:16:59.428382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.429 [2024-07-25 12:16:59.428409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.430 [2024-07-25 12:16:59.428427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.430 [2024-07-25 12:16:59.428438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.430 [2024-07-25 12:16:59.428465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.430 qpair failed and we were unable to recover it.
00:30:22.430 [2024-07-25 12:16:59.438236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.430 [2024-07-25 12:16:59.438360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.430 [2024-07-25 12:16:59.438388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.430 [2024-07-25 12:16:59.438401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.430 [2024-07-25 12:16:59.438412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.430 [2024-07-25 12:16:59.438439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.430 qpair failed and we were unable to recover it.
00:30:22.430 [2024-07-25 12:16:59.448241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.430 [2024-07-25 12:16:59.448401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.430 [2024-07-25 12:16:59.448429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.430 [2024-07-25 12:16:59.448442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.430 [2024-07-25 12:16:59.448453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.430 [2024-07-25 12:16:59.448479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.430 qpair failed and we were unable to recover it.
00:30:22.430 [2024-07-25 12:16:59.458478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.430 [2024-07-25 12:16:59.458637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.430 [2024-07-25 12:16:59.458664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.430 [2024-07-25 12:16:59.458677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.430 [2024-07-25 12:16:59.458689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.430 [2024-07-25 12:16:59.458715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.430 qpair failed and we were unable to recover it.
00:30:22.430 [2024-07-25 12:16:59.468298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.430 [2024-07-25 12:16:59.468419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.430 [2024-07-25 12:16:59.468451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.430 [2024-07-25 12:16:59.468464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.430 [2024-07-25 12:16:59.468475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90
00:30:22.430 [2024-07-25 12:16:59.468502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.430 qpair failed and we were unable to recover it.
00:30:22.430 [2024-07-25 12:16:59.478281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.478429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.478457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.478471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.478482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.478508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.488326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.488475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.488503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.488516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.488527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.488553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.498637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.498789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.498816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.498830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.498841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.498868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.508467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.508594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.508627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.508640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.508651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.508676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.518512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.518676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.518702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.518720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.518731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.518758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.528587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.528716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.528744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.528757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.528769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.528795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.538778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.538933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.538961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.538974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.538985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.539012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.548631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.430 [2024-07-25 12:16:59.548751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.430 [2024-07-25 12:16:59.548783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.430 [2024-07-25 12:16:59.548797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.430 [2024-07-25 12:16:59.548808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.430 [2024-07-25 12:16:59.548834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.430 qpair failed and we were unable to recover it. 
00:30:22.430 [2024-07-25 12:16:59.558659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.558782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.558810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.558823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.558834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.558860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.568730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.568853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.568880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.568893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.568904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.568931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.578930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.579114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.579140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.579154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.579166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.579192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.588823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.588949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.588976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.588991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.589002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.589028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.598942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.599080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.599106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.599119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.599129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.599155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.608834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.608954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.608987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.608999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.609011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.609036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.619182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.619335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.619363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.619376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.619387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.619413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.628954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.629078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.629105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.629118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.629129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.629155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.638928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.639048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.639075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.639089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.639100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.639126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.648990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.649144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.649171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.649184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.649195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.649225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.659180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.659326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.659352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.659365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.659376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.659402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.669055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.669185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.669211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.669224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.669235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.669261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.679052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.679171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.679198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.431 [2024-07-25 12:16:59.679211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.431 [2024-07-25 12:16:59.679222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.431 [2024-07-25 12:16:59.679248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.431 qpair failed and we were unable to recover it. 
00:30:22.431 [2024-07-25 12:16:59.689096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.431 [2024-07-25 12:16:59.689218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.431 [2024-07-25 12:16:59.689245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.432 [2024-07-25 12:16:59.689258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.432 [2024-07-25 12:16:59.689269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.432 [2024-07-25 12:16:59.689296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.432 qpair failed and we were unable to recover it. 
00:30:22.432 [2024-07-25 12:16:59.699268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.432 [2024-07-25 12:16:59.699450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.432 [2024-07-25 12:16:59.699481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.432 [2024-07-25 12:16:59.699495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.432 [2024-07-25 12:16:59.699505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.432 [2024-07-25 12:16:59.699533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.432 qpair failed and we were unable to recover it. 
00:30:22.432 [2024-07-25 12:16:59.709041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.432 [2024-07-25 12:16:59.709164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.432 [2024-07-25 12:16:59.709192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.432 [2024-07-25 12:16:59.709206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.432 [2024-07-25 12:16:59.709217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bfc000b90 00:30:22.432 [2024-07-25 12:16:59.709242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.432 qpair failed and we were unable to recover it. 
00:30:22.432 [2024-07-25 12:16:59.719178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.432 [2024-07-25 12:16:59.719346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.432 [2024-07-25 12:16:59.719403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.432 [2024-07-25 12:16:59.719429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.432 [2024-07-25 12:16:59.719451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.432 [2024-07-25 12:16:59.719499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.432 qpair failed and we were unable to recover it. 
00:30:22.691 [2024-07-25 12:16:59.729198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.729356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.729387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.729402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.729415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.729445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.739409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.739555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.739578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.739589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.739612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.739635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.749318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.749484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.749508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.749520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.749530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.749552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.759301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.759409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.759430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.759440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.759449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.759469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.769300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.769412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.769433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.769443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.769452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.769472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.779529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.779677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.779700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.779710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.779719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.779739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.789300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.789428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.789449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.789460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.789469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.789490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.799378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.799483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.799504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.799514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.799523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.799543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.809478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.809613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.809635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.809645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.809654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.809675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.819661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.819795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.819818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.819828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.819837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.819857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.829509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.829631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.829655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.829666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.829679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.829701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.839498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.692 [2024-07-25 12:16:59.839605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.692 [2024-07-25 12:16:59.839627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.692 [2024-07-25 12:16:59.839637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.692 [2024-07-25 12:16:59.839646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.692 [2024-07-25 12:16:59.839666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.692 qpair failed and we were unable to recover it. 
00:30:22.692 [2024-07-25 12:16:59.849545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.849655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.849676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.849687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.849696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.849717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.859716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.859845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.859867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.859877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.859886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.859907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.869560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.869682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.869710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.869720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.869729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.869749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.879655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.879767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.879788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.879799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.879808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.879828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.889688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.889795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.889816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.889826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.889834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.889854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.899936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.900068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.900090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.900100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.900109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.900130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.909750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.909856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.909876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.909885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.909894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.909913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.919795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.919905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.919926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.919940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.919949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.919969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.929838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.929942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.929962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.929972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.929981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.930000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.940057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.940231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.940253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.940262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.940271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.940292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.949926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.950081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.950104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.950114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.950123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.950143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.959939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.960040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.960060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.960070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.960079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.960099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.970015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.693 [2024-07-25 12:16:59.970160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.693 [2024-07-25 12:16:59.970182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.693 [2024-07-25 12:16:59.970192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.693 [2024-07-25 12:16:59.970201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.693 [2024-07-25 12:16:59.970221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.693 qpair failed and we were unable to recover it. 
00:30:22.693 [2024-07-25 12:16:59.980188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.694 [2024-07-25 12:16:59.980334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.694 [2024-07-25 12:16:59.980356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.694 [2024-07-25 12:16:59.980366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.694 [2024-07-25 12:16:59.980375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.694 [2024-07-25 12:16:59.980394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.694 qpair failed and we were unable to recover it. 
00:30:22.694 [2024-07-25 12:16:59.990091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.694 [2024-07-25 12:16:59.990208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.694 [2024-07-25 12:16:59.990228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.694 [2024-07-25 12:16:59.990238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.694 [2024-07-25 12:16:59.990247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.694 [2024-07-25 12:16:59.990267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.694 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.000062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.000167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.000188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.000198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.000206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.000225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.010076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.010178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.010199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.010213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.010222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.010241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.020252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.020387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.020409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.020419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.020428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.020447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.030114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.030224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.030245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.030255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.030264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.030284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.040204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.040370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.040393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.040403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.040413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.040433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.050246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.050355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.050376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.050386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.050395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.050415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.060420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.060556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.060578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.060588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.060597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.060622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.070306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.070415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.070436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.070446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.070455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.070476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.080331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.080436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.080460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.080470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.080479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.080500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.090371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.090482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.090503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.090513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.090522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.090543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.100559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.100705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.100728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.100742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.954 [2024-07-25 12:17:00.100751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.954 [2024-07-25 12:17:00.100771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.954 qpair failed and we were unable to recover it. 
00:30:22.954 [2024-07-25 12:17:00.110421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.954 [2024-07-25 12:17:00.110536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.954 [2024-07-25 12:17:00.110556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.954 [2024-07-25 12:17:00.110567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.110576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.110596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.120458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.120557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.120578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.120587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.120596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.120620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.130508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.130638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.130658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.130668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.130677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.130697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.140720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.140853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.140874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.140885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.140893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.140913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.150540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.150662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.150682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.150693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.150701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.150721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.160589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.160736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.160758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.160768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.160777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.160796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.170587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.170703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.170723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.170733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.170742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.170761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.180866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.180997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.181018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.181028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.181037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.181056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.190613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.190764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.190790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.190800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.190809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.190830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.200716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.200821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.200841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.200852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.200860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.200880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.210668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.210770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.210791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.210800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.210809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.210829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.220993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.221163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.221184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.221194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.221203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.221224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.955 [2024-07-25 12:17:00.230820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.955 [2024-07-25 12:17:00.230939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.955 [2024-07-25 12:17:00.230959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.955 [2024-07-25 12:17:00.230969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.955 [2024-07-25 12:17:00.230979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.955 [2024-07-25 12:17:00.230998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.955 qpair failed and we were unable to recover it. 
00:30:22.956 [2024-07-25 12:17:00.240845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.956 [2024-07-25 12:17:00.240955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.956 [2024-07-25 12:17:00.240975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.956 [2024-07-25 12:17:00.240986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.956 [2024-07-25 12:17:00.240995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.956 [2024-07-25 12:17:00.241015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.956 qpair failed and we were unable to recover it. 
00:30:22.956 [2024-07-25 12:17:00.250881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.956 [2024-07-25 12:17:00.250989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.956 [2024-07-25 12:17:00.251011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.956 [2024-07-25 12:17:00.251021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.956 [2024-07-25 12:17:00.251030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:22.956 [2024-07-25 12:17:00.251050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.956 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.261153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.261332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.261354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.261364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.261373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.261392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.270936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.271049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.271069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.271079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.271088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.271107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.280987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.281105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.281138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.281150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.281159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.281179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.290923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.291047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.291070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.291081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.291089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.291109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.301175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.301343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.301364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.301375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.301384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.301403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.311053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.311209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.311232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.311242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.311252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.311272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.321107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.321218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.321238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.321249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.321258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.321283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.331145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.215 [2024-07-25 12:17:00.331254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.215 [2024-07-25 12:17:00.331275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.215 [2024-07-25 12:17:00.331286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.215 [2024-07-25 12:17:00.331297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.215 [2024-07-25 12:17:00.331319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.215 qpair failed and we were unable to recover it. 
00:30:23.215 [2024-07-25 12:17:00.341305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.341437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.341459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.341469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.341478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.341498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.351201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.351312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.351332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.351343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.351351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.351370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.361233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.361382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.361405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.361416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.361426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.361447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.371201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.371300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.371327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.371339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.371349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.371371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.381469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.381609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.381631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.381643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.381652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.381674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.391337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.391448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.391469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.391480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.391489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.391510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.401396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.401498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.401518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.401528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.401536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.401556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.411398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.411516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.411537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.411548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.411557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.411582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.421653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.421789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.421812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.421822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.421832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.421852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.431476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.431585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.431611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.431622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.431630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.431651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.441477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.441608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.441632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.441643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.441653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.441674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.216 [2024-07-25 12:17:00.451529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.216 [2024-07-25 12:17:00.451637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.216 [2024-07-25 12:17:00.451658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.216 [2024-07-25 12:17:00.451668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.216 [2024-07-25 12:17:00.451678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.216 [2024-07-25 12:17:00.451698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.216 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-07-25 12:17:00.461752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.217 [2024-07-25 12:17:00.461892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.217 [2024-07-25 12:17:00.461918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.217 [2024-07-25 12:17:00.461929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.217 [2024-07-25 12:17:00.461939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.217 [2024-07-25 12:17:00.461960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-07-25 12:17:00.471609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.217 [2024-07-25 12:17:00.471719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.217 [2024-07-25 12:17:00.471740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.217 [2024-07-25 12:17:00.471750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.217 [2024-07-25 12:17:00.471759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.217 [2024-07-25 12:17:00.471779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-07-25 12:17:00.481623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.217 [2024-07-25 12:17:00.481736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.217 [2024-07-25 12:17:00.481757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.217 [2024-07-25 12:17:00.481767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.217 [2024-07-25 12:17:00.481777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.217 [2024-07-25 12:17:00.481798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-07-25 12:17:00.491681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.217 [2024-07-25 12:17:00.491865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.217 [2024-07-25 12:17:00.491887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.217 [2024-07-25 12:17:00.491898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.217 [2024-07-25 12:17:00.491907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.217 [2024-07-25 12:17:00.491928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-07-25 12:17:00.501878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.217 [2024-07-25 12:17:00.502011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.217 [2024-07-25 12:17:00.502033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.217 [2024-07-25 12:17:00.502043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.217 [2024-07-25 12:17:00.502057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.217 [2024-07-25 12:17:00.502078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-07-25 12:17:00.511737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.217 [2024-07-25 12:17:00.511844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.217 [2024-07-25 12:17:00.511865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.217 [2024-07-25 12:17:00.511875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.217 [2024-07-25 12:17:00.511884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.217 [2024-07-25 12:17:00.511904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.521721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.521878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.521901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.521913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.521923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.521944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.531852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.531978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.532000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.532010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.532019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.532040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.542011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.542139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.542161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.542171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.542181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.542200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.551951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.552065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.552085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.552096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.552105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.552126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.561855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.561999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.562020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.562030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.562040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.562060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.571957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.572099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.572121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.572132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.572141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.572162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.582159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.582294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.582315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.582326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.582336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.582356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.591994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.592106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.592126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.592136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.592150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.592171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.601954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.602104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.602126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.478 [2024-07-25 12:17:00.602137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.478 [2024-07-25 12:17:00.602147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.478 [2024-07-25 12:17:00.602168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.478 qpair failed and we were unable to recover it. 
00:30:23.478 [2024-07-25 12:17:00.612058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.478 [2024-07-25 12:17:00.612168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.478 [2024-07-25 12:17:00.612188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.612199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.612209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.612230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.622229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.622392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.622414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.622424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.622434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.622455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.632128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.632248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.632268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.632279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.632289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.632309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.642122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.642282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.642304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.642315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.642325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.642345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.652137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.652263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.652286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.652298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.652308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.652329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.662473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.662615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.662637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.662647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.662657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.662679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.672202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.672364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.672386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.672396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.672406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.672427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.682222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.682350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.682372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.682383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.682397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.682418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.692316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.692460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.692482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.692492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.692502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.692523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.702497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.702633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.702655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.702666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.702676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.702697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.712388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.712502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.712524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.712535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.712544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.712564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.722361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.722542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.722564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.479 [2024-07-25 12:17:00.722575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.479 [2024-07-25 12:17:00.722584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.479 [2024-07-25 12:17:00.722615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.479 qpair failed and we were unable to recover it. 
00:30:23.479 [2024-07-25 12:17:00.732433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.479 [2024-07-25 12:17:00.732544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.479 [2024-07-25 12:17:00.732565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.480 [2024-07-25 12:17:00.732576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.480 [2024-07-25 12:17:00.732586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.480 [2024-07-25 12:17:00.732611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.480 qpair failed and we were unable to recover it. 
00:30:23.480 [2024-07-25 12:17:00.742697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.480 [2024-07-25 12:17:00.742830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.480 [2024-07-25 12:17:00.742852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.480 [2024-07-25 12:17:00.742862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.480 [2024-07-25 12:17:00.742872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.480 [2024-07-25 12:17:00.742893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.480 qpair failed and we were unable to recover it. 
00:30:23.480 [2024-07-25 12:17:00.752550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.480 [2024-07-25 12:17:00.752680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.480 [2024-07-25 12:17:00.752702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.480 [2024-07-25 12:17:00.752713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.480 [2024-07-25 12:17:00.752723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.480 [2024-07-25 12:17:00.752742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.480 qpair failed and we were unable to recover it. 
00:30:23.480 [2024-07-25 12:17:00.762547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.480 [2024-07-25 12:17:00.762710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.480 [2024-07-25 12:17:00.762732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.480 [2024-07-25 12:17:00.762743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.480 [2024-07-25 12:17:00.762753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.480 [2024-07-25 12:17:00.762775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.480 qpair failed and we were unable to recover it. 
00:30:23.480 [2024-07-25 12:17:00.772641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.480 [2024-07-25 12:17:00.772763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.480 [2024-07-25 12:17:00.772785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.480 [2024-07-25 12:17:00.772801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.480 [2024-07-25 12:17:00.772811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.480 [2024-07-25 12:17:00.772831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.480 qpair failed and we were unable to recover it. 
00:30:23.741 [2024-07-25 12:17:00.782839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.741 [2024-07-25 12:17:00.783015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.741 [2024-07-25 12:17:00.783038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.741 [2024-07-25 12:17:00.783048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.741 [2024-07-25 12:17:00.783058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.741 [2024-07-25 12:17:00.783079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.741 qpair failed and we were unable to recover it. 
00:30:23.741 [2024-07-25 12:17:00.792589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.741 [2024-07-25 12:17:00.792704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.741 [2024-07-25 12:17:00.792725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.741 [2024-07-25 12:17:00.792735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.741 [2024-07-25 12:17:00.792744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.741 [2024-07-25 12:17:00.792764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.741 qpair failed and we were unable to recover it. 
00:30:23.741 [2024-07-25 12:17:00.802685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.741 [2024-07-25 12:17:00.802852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.741 [2024-07-25 12:17:00.802874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.741 [2024-07-25 12:17:00.802885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.741 [2024-07-25 12:17:00.802894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.741 [2024-07-25 12:17:00.802915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.741 qpair failed and we were unable to recover it. 
00:30:23.741 [2024-07-25 12:17:00.812735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.741 [2024-07-25 12:17:00.812839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.741 [2024-07-25 12:17:00.812859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.741 [2024-07-25 12:17:00.812870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.741 [2024-07-25 12:17:00.812879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.741 [2024-07-25 12:17:00.812899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.741 qpair failed and we were unable to recover it. 
00:30:23.741 [2024-07-25 12:17:00.822956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.741 [2024-07-25 12:17:00.823092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.741 [2024-07-25 12:17:00.823117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.741 [2024-07-25 12:17:00.823128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.741 [2024-07-25 12:17:00.823138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.741 [2024-07-25 12:17:00.823160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.741 qpair failed and we were unable to recover it. 
00:30:23.741 [2024-07-25 12:17:00.832777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.741 [2024-07-25 12:17:00.832897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.741 [2024-07-25 12:17:00.832919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.741 [2024-07-25 12:17:00.832930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.741 [2024-07-25 12:17:00.832940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.832962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.842828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.842928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.842949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.842960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.842969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.842990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.852826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.852978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.853001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.853012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.853021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.853042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.863101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.863234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.863255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.863270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.863280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.863300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.872839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.872950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.872971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.872982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.872992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.873013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.882925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.883035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.883057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.883067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.883077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.883097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.892973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.893088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.893108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.893119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.893129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.893151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.903218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.903365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.903386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.903397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.903406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.903426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.913062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.913178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.913200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.913211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.913221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.913242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.923141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.923246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.923267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.923277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.923287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.923308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.933119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.933239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.933260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.933271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.933280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.933302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.943315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.943457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.943479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.943490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.943500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.742 [2024-07-25 12:17:00.943521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.742 qpair failed and we were unable to recover it. 
00:30:23.742 [2024-07-25 12:17:00.953185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.742 [2024-07-25 12:17:00.953337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.742 [2024-07-25 12:17:00.953363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.742 [2024-07-25 12:17:00.953374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.742 [2024-07-25 12:17:00.953384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:00.953405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:00.963288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:00.963440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:00.963463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:00.963474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:00.963483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:00.963505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:00.973220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:00.973325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:00.973347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:00.973358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:00.973368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:00.973388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:00.983503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:00.983642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:00.983663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:00.983674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:00.983685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:00.983707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:00.993332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:00.993443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:00.993464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:00.993474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:00.993483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:00.993504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:01.003344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:01.003445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:01.003466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:01.003477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:01.003486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:01.003507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:01.013408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:01.013527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:01.013547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:01.013557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:01.013566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:01.013587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:01.023666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:01.023841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:01.023863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:01.023873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:01.023883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:01.023904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:23.743 [2024-07-25 12:17:01.033433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.743 [2024-07-25 12:17:01.033545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.743 [2024-07-25 12:17:01.033567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.743 [2024-07-25 12:17:01.033577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.743 [2024-07-25 12:17:01.033587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:23.743 [2024-07-25 12:17:01.033613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:23.743 qpair failed and we were unable to recover it. 
00:30:24.003 [2024-07-25 12:17:01.043464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.003 [2024-07-25 12:17:01.043570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.003 [2024-07-25 12:17:01.043599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.003 [2024-07-25 12:17:01.043618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.003 [2024-07-25 12:17:01.043627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.003 [2024-07-25 12:17:01.043648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.003 qpair failed and we were unable to recover it.
00:30:24.003 [2024-07-25 12:17:01.053498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.003 [2024-07-25 12:17:01.053601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.003 [2024-07-25 12:17:01.053627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.003 [2024-07-25 12:17:01.053638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.003 [2024-07-25 12:17:01.053647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.003 [2024-07-25 12:17:01.053666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.003 qpair failed and we were unable to recover it.
00:30:24.003 [2024-07-25 12:17:01.063675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.003 [2024-07-25 12:17:01.063807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.003 [2024-07-25 12:17:01.063829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.003 [2024-07-25 12:17:01.063840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.003 [2024-07-25 12:17:01.063849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.003 [2024-07-25 12:17:01.063869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.003 qpair failed and we were unable to recover it.
00:30:24.003 [2024-07-25 12:17:01.073567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.003 [2024-07-25 12:17:01.073690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.003 [2024-07-25 12:17:01.073711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.003 [2024-07-25 12:17:01.073722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.003 [2024-07-25 12:17:01.073732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.003 [2024-07-25 12:17:01.073752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.003 qpair failed and we were unable to recover it.
00:30:24.003 [2024-07-25 12:17:01.083684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.003 [2024-07-25 12:17:01.083794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.003 [2024-07-25 12:17:01.083814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.003 [2024-07-25 12:17:01.083826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.003 [2024-07-25 12:17:01.083837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.003 [2024-07-25 12:17:01.083863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.003 qpair failed and we were unable to recover it.
00:30:24.003 [2024-07-25 12:17:01.093629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.003 [2024-07-25 12:17:01.093736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.003 [2024-07-25 12:17:01.093757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.003 [2024-07-25 12:17:01.093767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.003 [2024-07-25 12:17:01.093777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.003 [2024-07-25 12:17:01.093797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.003 qpair failed and we were unable to recover it.
00:30:24.003 [2024-07-25 12:17:01.103898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.104030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.104052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.104064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.104073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.104095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.113736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.113854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.113874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.113884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.113894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.113914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.123759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.123869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.123890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.123900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.123910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.123931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.133765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.133863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.133888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.133898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.133907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.133928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.144064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.144202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.144225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.144235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.144245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.144266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.153893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.154012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.154033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.154043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.154053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.154074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.163874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.164007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.164029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.164040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.164050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.164070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.173912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.174056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.174077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.174088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.174097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.174122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.184149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.184301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.184324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.184335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.184344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.184364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.194035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.194175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.194197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.194208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.194218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.194237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.203969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.204076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.204096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.204106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.204115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.204135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.214038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.004 [2024-07-25 12:17:01.214138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.004 [2024-07-25 12:17:01.214159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.004 [2024-07-25 12:17:01.214169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.004 [2024-07-25 12:17:01.214178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.004 [2024-07-25 12:17:01.214198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.004 qpair failed and we were unable to recover it.
00:30:24.004 [2024-07-25 12:17:01.224283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.224428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.224454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.224464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.224474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.224494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.234104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.234224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.234245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.234256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.234265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.234286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.244153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.244264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.244285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.244295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.244305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.244326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.254163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.254261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.254281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.254291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.254300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.254320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.264448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.264585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.264612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.264624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.264637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.264658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.274237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.274369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.274392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.274403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.274413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.274433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.284234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.284339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.284359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.284370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.284380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.284399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.005 [2024-07-25 12:17:01.294309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.005 [2024-07-25 12:17:01.294450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.005 [2024-07-25 12:17:01.294473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.005 [2024-07-25 12:17:01.294484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.005 [2024-07-25 12:17:01.294493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.005 [2024-07-25 12:17:01.294515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.005 qpair failed and we were unable to recover it.
00:30:24.265 [2024-07-25 12:17:01.304514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.304652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.304673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.304684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.304694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.304715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.314398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.314520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.314540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.314552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.314561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.314582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.324392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.324495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.324516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.324527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.324538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.324559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.334432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.334538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.334558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.334568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.334577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.334597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.344692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.344824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.344846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.344857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.344867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.344887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.354555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.354670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.354692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.354703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.354717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.354737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.364546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.364654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.364677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.364689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.364698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.364721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.374581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.374695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.374718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.374729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.374739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.374760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.384750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.384889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.384911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.384922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.384932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.384955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.394638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.266 [2024-07-25 12:17:01.394758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.266 [2024-07-25 12:17:01.394779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.266 [2024-07-25 12:17:01.394790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.266 [2024-07-25 12:17:01.394799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:24.266 [2024-07-25 12:17:01.394820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.266 qpair failed and we were unable to recover it.
00:30:24.266 [2024-07-25 12:17:01.404672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.266 [2024-07-25 12:17:01.404799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.266 [2024-07-25 12:17:01.404821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.266 [2024-07-25 12:17:01.404832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.266 [2024-07-25 12:17:01.404841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.266 [2024-07-25 12:17:01.404861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.266 qpair failed and we were unable to recover it. 
00:30:24.266 [2024-07-25 12:17:01.414644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.266 [2024-07-25 12:17:01.414757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.266 [2024-07-25 12:17:01.414778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.266 [2024-07-25 12:17:01.414789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.266 [2024-07-25 12:17:01.414798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.266 [2024-07-25 12:17:01.414819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.266 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.424947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.425112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.425134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.425145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.425156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.425176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.434803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.434938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.434961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.434972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.434982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.435002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.444829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.444936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.444958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.444969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.444983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.445004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.454865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.454967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.454988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.454998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.455007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.455027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.465089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.465276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.465298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.465309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.465319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.465340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.474909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.475029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.475049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.475060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.475070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.475091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.484944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.485054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.485075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.485086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.485096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.485117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.495019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.495209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.495231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.495242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.495252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.495272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.505175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.505310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.505332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.505344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.505353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.505374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.515041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.515158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.515178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.515189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.515199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.515220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.524994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.525101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.525121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.525132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.525142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.525162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-07-25 12:17:01.535104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.267 [2024-07-25 12:17:01.535236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.267 [2024-07-25 12:17:01.535259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.267 [2024-07-25 12:17:01.535274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.267 [2024-07-25 12:17:01.535283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.267 [2024-07-25 12:17:01.535303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.268 [2024-07-25 12:17:01.545367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.268 [2024-07-25 12:17:01.545523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.268 [2024-07-25 12:17:01.545544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.268 [2024-07-25 12:17:01.545555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.268 [2024-07-25 12:17:01.545565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.268 [2024-07-25 12:17:01.545584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.268 qpair failed and we were unable to recover it. 
00:30:24.268 [2024-07-25 12:17:01.555177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.268 [2024-07-25 12:17:01.555303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.268 [2024-07-25 12:17:01.555325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.268 [2024-07-25 12:17:01.555335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.268 [2024-07-25 12:17:01.555345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.268 [2024-07-25 12:17:01.555365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.268 qpair failed and we were unable to recover it. 
00:30:24.528 [2024-07-25 12:17:01.565177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.528 [2024-07-25 12:17:01.565340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.528 [2024-07-25 12:17:01.565362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.528 [2024-07-25 12:17:01.565373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.528 [2024-07-25 12:17:01.565382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.528 [2024-07-25 12:17:01.565404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.528 qpair failed and we were unable to recover it. 
00:30:24.528 [2024-07-25 12:17:01.575238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.528 [2024-07-25 12:17:01.575340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.528 [2024-07-25 12:17:01.575360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.528 [2024-07-25 12:17:01.575370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.528 [2024-07-25 12:17:01.575380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.528 [2024-07-25 12:17:01.575400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.528 qpair failed and we were unable to recover it. 
00:30:24.528 [2024-07-25 12:17:01.585456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.528 [2024-07-25 12:17:01.585592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.528 [2024-07-25 12:17:01.585621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.528 [2024-07-25 12:17:01.585631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.528 [2024-07-25 12:17:01.585641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.528 [2024-07-25 12:17:01.585663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.528 qpair failed and we were unable to recover it. 
00:30:24.528 [2024-07-25 12:17:01.595285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.528 [2024-07-25 12:17:01.595401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.528 [2024-07-25 12:17:01.595421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.528 [2024-07-25 12:17:01.595432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.528 [2024-07-25 12:17:01.595441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.528 [2024-07-25 12:17:01.595461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.528 qpair failed and we were unable to recover it. 
00:30:24.528 [2024-07-25 12:17:01.605485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.605617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.605641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.605652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.605661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.605681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.615378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.615480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.615501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.615512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.615521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.615540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.625750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.625931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.625953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.625967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.625977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.625998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.635454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.635616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.635637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.635648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.635657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.635677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.645446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.645610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.645631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.645641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.645650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.645671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.655489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.655676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.655697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.655706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.655715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.655737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.665727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.665886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.665907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.665917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.665926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.665947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.675577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.675710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.675732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.675742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.675751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.675771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.685562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.685673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.685693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.685704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.685713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.685734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.695582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.695739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.695760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.695770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.695778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.695798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.705816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.705947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.705967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.705978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.705987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.706007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.529 [2024-07-25 12:17:01.715651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.529 [2024-07-25 12:17:01.715762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.529 [2024-07-25 12:17:01.715783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.529 [2024-07-25 12:17:01.715797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.529 [2024-07-25 12:17:01.715806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.529 [2024-07-25 12:17:01.715827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.529 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.725596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.725716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.725737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.725748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.725757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.725776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.735716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.735818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.735839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.735850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.735858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.735878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.745943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.746073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.746094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.746104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.746113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.746133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.755777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.755890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.755910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.755920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.755929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.755948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.765732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.765841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.765861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.765871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.765880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.765900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.775845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.775956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.775977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.775987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.775997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.776016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.786078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.786250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.786270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.786281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.786289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.786308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.795888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.796032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.796052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.796062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.796070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.796090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.805940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.806050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.806077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.806088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.806097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.806116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.815959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.816068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.816089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.816099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.816109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.816128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.530 [2024-07-25 12:17:01.826216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.530 [2024-07-25 12:17:01.826365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.530 [2024-07-25 12:17:01.826388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.530 [2024-07-25 12:17:01.826398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.530 [2024-07-25 12:17:01.826408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.530 [2024-07-25 12:17:01.826429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.530 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.836023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.836150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.836172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.836182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.836191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.836211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.846087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.846228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.846248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.846258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.846267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.846293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.856093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.856193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.856214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.856224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.856234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.856253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.866343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.866520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.866540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.866550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.866559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.866580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.876098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.876209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.876231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.876241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.876250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.876270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.886192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.886296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.886317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.886327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.886336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.886356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.896206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.896313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.896338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.896349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.896358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.896377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.906480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.906630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.906651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.906662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.791 [2024-07-25 12:17:01.906671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.791 [2024-07-25 12:17:01.906691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.791 qpair failed and we were unable to recover it. 
00:30:24.791 [2024-07-25 12:17:01.916314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.791 [2024-07-25 12:17:01.916517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.791 [2024-07-25 12:17:01.916539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.791 [2024-07-25 12:17:01.916549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.916559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.916579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.926339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.926446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.926467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.926477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.926486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.926506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.936463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.936582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.936608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.936619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.936628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.936652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.946677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.946812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.946833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.946843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.946852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.946872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.956471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.956591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.956616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.956628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.956636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.956656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.966498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.966611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.966632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.966643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.966652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.966672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.976588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.976703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.976724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.976734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.976743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.976763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.986742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.986878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.986903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.986913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.986922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.986942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:01.996635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:01.996754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:01.996775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:01.996785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:01.996794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:01.996814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:02.006648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:02.006756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:02.006778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:02.006788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:02.006797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:02.006816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:02.016699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:02.016811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:02.016832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:02.016843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:02.016852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:02.016872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:02.026904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:02.027041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:02.027062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:02.027072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.792 [2024-07-25 12:17:02.027081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.792 [2024-07-25 12:17:02.027104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.792 qpair failed and we were unable to recover it. 
00:30:24.792 [2024-07-25 12:17:02.036742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.792 [2024-07-25 12:17:02.036856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.792 [2024-07-25 12:17:02.036877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.792 [2024-07-25 12:17:02.036887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.793 [2024-07-25 12:17:02.036897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.793 [2024-07-25 12:17:02.036916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.793 qpair failed and we were unable to recover it. 
00:30:24.793 [2024-07-25 12:17:02.046759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.793 [2024-07-25 12:17:02.046863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.793 [2024-07-25 12:17:02.046883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.793 [2024-07-25 12:17:02.046894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.793 [2024-07-25 12:17:02.046903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.793 [2024-07-25 12:17:02.046922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.793 qpair failed and we were unable to recover it. 
00:30:24.793 [2024-07-25 12:17:02.056790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.793 [2024-07-25 12:17:02.056895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.793 [2024-07-25 12:17:02.056916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.793 [2024-07-25 12:17:02.056926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.793 [2024-07-25 12:17:02.056935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.793 [2024-07-25 12:17:02.056954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.793 qpair failed and we were unable to recover it. 
00:30:24.793 [2024-07-25 12:17:02.067064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.793 [2024-07-25 12:17:02.067229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.793 [2024-07-25 12:17:02.067250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.793 [2024-07-25 12:17:02.067260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.793 [2024-07-25 12:17:02.067269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.793 [2024-07-25 12:17:02.067288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.793 qpair failed and we were unable to recover it. 
00:30:24.793 [2024-07-25 12:17:02.076878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.793 [2024-07-25 12:17:02.076993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.793 [2024-07-25 12:17:02.077018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.793 [2024-07-25 12:17:02.077029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.793 [2024-07-25 12:17:02.077038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.793 [2024-07-25 12:17:02.077058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.793 qpair failed and we were unable to recover it. 
00:30:24.793 [2024-07-25 12:17:02.086924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.793 [2024-07-25 12:17:02.087072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.793 [2024-07-25 12:17:02.087093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.793 [2024-07-25 12:17:02.087103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.793 [2024-07-25 12:17:02.087111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:24.793 [2024-07-25 12:17:02.087132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.793 qpair failed and we were unable to recover it. 
00:30:25.053 [2024-07-25 12:17:02.096943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.053 [2024-07-25 12:17:02.097046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.053 [2024-07-25 12:17:02.097068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.053 [2024-07-25 12:17:02.097079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.053 [2024-07-25 12:17:02.097089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.053 [2024-07-25 12:17:02.097109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.053 qpair failed and we were unable to recover it. 
00:30:25.053 [2024-07-25 12:17:02.107162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.053 [2024-07-25 12:17:02.107343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.053 [2024-07-25 12:17:02.107364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.053 [2024-07-25 12:17:02.107374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.053 [2024-07-25 12:17:02.107383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.053 [2024-07-25 12:17:02.107403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.053 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.117036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.117173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.117193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.117204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.117217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.117237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.127011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.127115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.127138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.127149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.127159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.127180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.136990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.137091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.137112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.137124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.137133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.137152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.147316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.147449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.147470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.147481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.147490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.147510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.157131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.157248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.157269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.157280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.157290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.157309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.167180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.167291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.167312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.167323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.167332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.167351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.177124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.177228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.177249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.177259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.177268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.177287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.187430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.187586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.187611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.187622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.187631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.187651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.197245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.197363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.197383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.197393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.197402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.197422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.207279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.207382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.207403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.207414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.207427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.207447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.217302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.217413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.217436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.217447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.217458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.054 [2024-07-25 12:17:02.217477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.054 qpair failed and we were unable to recover it. 
00:30:25.054 [2024-07-25 12:17:02.227581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.054 [2024-07-25 12:17:02.227748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.054 [2024-07-25 12:17:02.227769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.054 [2024-07-25 12:17:02.227780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.054 [2024-07-25 12:17:02.227789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.227809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.237420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.237532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.237553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.237564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.237572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.237592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.247464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.247596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.247621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.247632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.247641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.247661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.257447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.257559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.257580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.257591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.257600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.257626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.267690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.267841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.267861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.267871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.267879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.267899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.277533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.277652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.277672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.277683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.277692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.277711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.287558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.287702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.287722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.287733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.287742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.287762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.297614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.297717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.297738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.297753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.297762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.297781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.307892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.308027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.308048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.308059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.308068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.308087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.317580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.317724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.317745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.317756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.317764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.317784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.327693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.327820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.327844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.327855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.327865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.327885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.337737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.337853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.337874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.337884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.337893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.055 [2024-07-25 12:17:02.337913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.055 qpair failed and we were unable to recover it. 
00:30:25.055 [2024-07-25 12:17:02.347937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.055 [2024-07-25 12:17:02.348072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.055 [2024-07-25 12:17:02.348093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.055 [2024-07-25 12:17:02.348104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.055 [2024-07-25 12:17:02.348113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.056 [2024-07-25 12:17:02.348132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.056 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.357814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.357954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.357974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.357984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.357993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.358013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.367793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.367903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.367924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.367935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.367944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.367963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.377889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.378000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.378022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.378032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.378041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.378060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.388028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.388167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.388188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.388203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.388212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.388232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.397936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.398082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.398103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.398113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.398121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.398141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.407951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.408108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.408129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.408139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.408148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.408168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.418067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.418171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.418192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.418203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.418212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.418232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.428184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.428318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.428340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.428350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.428360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.428380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.438079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.438191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.438213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.438224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.438232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.438252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.448029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.448140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.448163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.448173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.448183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.448204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.458118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.316 [2024-07-25 12:17:02.458276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.316 [2024-07-25 12:17:02.458297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.316 [2024-07-25 12:17:02.458307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.316 [2024-07-25 12:17:02.458316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.316 [2024-07-25 12:17:02.458336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.316 qpair failed and we were unable to recover it. 
00:30:25.316 [2024-07-25 12:17:02.468330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.468507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.468528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.468539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.468548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.468570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.478161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.478277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.478297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.478403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.478412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.478432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.488207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.488373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.488394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.488405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.488413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.488433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.498178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.498281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.498301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.498311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.498320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.498339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.508473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.508613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.508634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.508645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.508654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.508673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.518400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.518515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.518536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.518547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.518556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.518576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.528370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.528475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.528496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.528507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.528516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.528535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.538344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.538467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.538488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.538498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.538508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.538527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.548621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.548757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.548780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.548791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.548801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.548821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.558383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.558488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.558508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.558519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.558527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.558547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.568521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.568670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.568698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.317 [2024-07-25 12:17:02.568708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.317 [2024-07-25 12:17:02.568717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.317 [2024-07-25 12:17:02.568737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.317 qpair failed and we were unable to recover it. 
00:30:25.317 [2024-07-25 12:17:02.578437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.317 [2024-07-25 12:17:02.578548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.317 [2024-07-25 12:17:02.578569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.318 [2024-07-25 12:17:02.578579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.318 [2024-07-25 12:17:02.578588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.318 [2024-07-25 12:17:02.578612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.318 qpair failed and we were unable to recover it. 
00:30:25.318 [2024-07-25 12:17:02.588746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.318 [2024-07-25 12:17:02.588903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.318 [2024-07-25 12:17:02.588924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.318 [2024-07-25 12:17:02.588934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.318 [2024-07-25 12:17:02.588943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.318 [2024-07-25 12:17:02.588963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.318 qpair failed and we were unable to recover it. 
00:30:25.318 [2024-07-25 12:17:02.598615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.318 [2024-07-25 12:17:02.598721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.318 [2024-07-25 12:17:02.598743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.318 [2024-07-25 12:17:02.598754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.318 [2024-07-25 12:17:02.598763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.318 [2024-07-25 12:17:02.598783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.318 qpair failed and we were unable to recover it. 
00:30:25.318 [2024-07-25 12:17:02.608580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.318 [2024-07-25 12:17:02.608706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.318 [2024-07-25 12:17:02.608728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.318 [2024-07-25 12:17:02.608739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.318 [2024-07-25 12:17:02.608748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.318 [2024-07-25 12:17:02.608768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.318 qpair failed and we were unable to recover it. 
00:30:25.578 [2024-07-25 12:17:02.618601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.578 [2024-07-25 12:17:02.618724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.578 [2024-07-25 12:17:02.618745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.578 [2024-07-25 12:17:02.618755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.578 [2024-07-25 12:17:02.618764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.578 [2024-07-25 12:17:02.618783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.578 qpair failed and we were unable to recover it. 
00:30:25.578 [2024-07-25 12:17:02.628898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.578 [2024-07-25 12:17:02.629078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.578 [2024-07-25 12:17:02.629099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.578 [2024-07-25 12:17:02.629109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.578 [2024-07-25 12:17:02.629118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.578 [2024-07-25 12:17:02.629139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.578 qpair failed and we were unable to recover it. 
00:30:25.578 [2024-07-25 12:17:02.638834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.578 [2024-07-25 12:17:02.638953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.578 [2024-07-25 12:17:02.638973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.578 [2024-07-25 12:17:02.638984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.578 [2024-07-25 12:17:02.638993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.578 [2024-07-25 12:17:02.639012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.578 qpair failed and we were unable to recover it. 
00:30:25.578 [2024-07-25 12:17:02.648693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.578 [2024-07-25 12:17:02.648794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.578 [2024-07-25 12:17:02.648815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.578 [2024-07-25 12:17:02.648826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.578 [2024-07-25 12:17:02.648835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.578 [2024-07-25 12:17:02.648854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.578 qpair failed and we were unable to recover it.
00:30:25.578 [2024-07-25 12:17:02.658818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.578 [2024-07-25 12:17:02.658943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.578 [2024-07-25 12:17:02.658968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.578 [2024-07-25 12:17:02.658979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.578 [2024-07-25 12:17:02.658987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.578 [2024-07-25 12:17:02.659007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.578 qpair failed and we were unable to recover it.
00:30:25.578 [2024-07-25 12:17:02.669012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.578 [2024-07-25 12:17:02.669143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.578 [2024-07-25 12:17:02.669163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.578 [2024-07-25 12:17:02.669174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.578 [2024-07-25 12:17:02.669183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.578 [2024-07-25 12:17:02.669202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.578 qpair failed and we were unable to recover it.
00:30:25.578 [2024-07-25 12:17:02.678876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.578 [2024-07-25 12:17:02.678991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.679011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.679021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.679030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.679050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.688879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.688993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.689014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.689024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.689033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.689053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.698918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.699016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.699038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.699049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.699057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.699081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.709178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.709317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.709338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.709348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.709357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.709376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.718988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.719101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.719123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.719133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.719142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.719161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.729020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.729122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.729142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.729152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.729161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.729181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.739069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.739216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.739239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.739251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.739261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.739281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.749278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.749448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.749472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.749483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.749492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.749512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.759054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.759162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.759182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.759192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.759201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.759220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.769125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.769227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.769248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.769259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.769268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.769287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.579 [2024-07-25 12:17:02.779192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.579 [2024-07-25 12:17:02.779294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.579 [2024-07-25 12:17:02.779315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.579 [2024-07-25 12:17:02.779326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.579 [2024-07-25 12:17:02.779335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.579 [2024-07-25 12:17:02.779354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.579 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.789413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.789544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.789565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.789575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.789585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.789614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.799246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.799354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.799375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.799385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.799394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.799413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.809313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.809450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.809470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.809481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.809490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.809509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.819294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.819398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.819419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.819429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.819438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.819458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.829529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.829674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.829697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.829708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.829717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.829739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.839278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.839388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.839414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.839424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.839434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.839454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.849451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.849592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.849618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.849630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.849638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.849659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.859409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.580 [2024-07-25 12:17:02.859563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.580 [2024-07-25 12:17:02.859584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.580 [2024-07-25 12:17:02.859595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.580 [2024-07-25 12:17:02.859608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.580 [2024-07-25 12:17:02.859629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.580 qpair failed and we were unable to recover it.
00:30:25.580 [2024-07-25 12:17:02.869662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.581 [2024-07-25 12:17:02.869794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.581 [2024-07-25 12:17:02.869814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.581 [2024-07-25 12:17:02.869824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.581 [2024-07-25 12:17:02.869833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.581 [2024-07-25 12:17:02.869853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.581 qpair failed and we were unable to recover it.
00:30:25.842 [2024-07-25 12:17:02.879469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.842 [2024-07-25 12:17:02.879586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.842 [2024-07-25 12:17:02.879611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.842 [2024-07-25 12:17:02.879622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.842 [2024-07-25 12:17:02.879635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.842 [2024-07-25 12:17:02.879656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.842 qpair failed and we were unable to recover it.
00:30:25.842 [2024-07-25 12:17:02.889487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.842 [2024-07-25 12:17:02.889593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.842 [2024-07-25 12:17:02.889622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.842 [2024-07-25 12:17:02.889633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.842 [2024-07-25 12:17:02.889641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.842 [2024-07-25 12:17:02.889662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.842 qpair failed and we were unable to recover it.
00:30:25.842 [2024-07-25 12:17:02.899557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.842 [2024-07-25 12:17:02.899666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.842 [2024-07-25 12:17:02.899687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.842 [2024-07-25 12:17:02.899697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.842 [2024-07-25 12:17:02.899706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.842 [2024-07-25 12:17:02.899725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.842 qpair failed and we were unable to recover it.
00:30:25.842 [2024-07-25 12:17:02.909717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.842 [2024-07-25 12:17:02.909851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.842 [2024-07-25 12:17:02.909872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.842 [2024-07-25 12:17:02.909882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.842 [2024-07-25 12:17:02.909891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.842 [2024-07-25 12:17:02.909910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.842 qpair failed and we were unable to recover it.
00:30:25.842 [2024-07-25 12:17:02.919642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.842 [2024-07-25 12:17:02.919786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.842 [2024-07-25 12:17:02.919807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.919818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.919826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.919846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.929624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.929793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.929814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.929825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.929834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.929854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.939690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.939804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.939824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.939834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.939843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.939862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.949931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.950064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.950085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.950096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.950105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.950124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.959712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.959818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.959840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.959850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.959859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.959880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.969750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.969859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.969880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.969891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.969904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.969924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.979821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.979927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.979948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.979958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.979967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.979988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.990057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:02.990190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:02.990210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:02.990221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:02.990230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:02.990249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:02.999884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.843 [2024-07-25 12:17:03.000046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.843 [2024-07-25 12:17:03.000067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.843 [2024-07-25 12:17:03.000077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.843 [2024-07-25 12:17:03.000086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0
00:30:25.843 [2024-07-25 12:17:03.000106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.843 qpair failed and we were unable to recover it.
00:30:25.843 [2024-07-25 12:17:03.009920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.010020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.010040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.010051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.010060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.010079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.019912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.020032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.020052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.020062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.020071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.020091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.030176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.030352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.030372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.030382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.030392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.030412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.039905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.040014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.040035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.040046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.040055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.040075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.050023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.050136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.050158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.050169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.050178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.050198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.060068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.060168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.060188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.060198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.060211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.060230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.070296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.070432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.070453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.844 [2024-07-25 12:17:03.070463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.844 [2024-07-25 12:17:03.070472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.844 [2024-07-25 12:17:03.070491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.844 qpair failed and we were unable to recover it. 
00:30:25.844 [2024-07-25 12:17:03.080121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.844 [2024-07-25 12:17:03.080232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.844 [2024-07-25 12:17:03.080252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.845 [2024-07-25 12:17:03.080263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.845 [2024-07-25 12:17:03.080271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.845 [2024-07-25 12:17:03.080292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.845 qpair failed and we were unable to recover it. 
00:30:25.845 [2024-07-25 12:17:03.090157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.845 [2024-07-25 12:17:03.090269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.845 [2024-07-25 12:17:03.090290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.845 [2024-07-25 12:17:03.090300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.845 [2024-07-25 12:17:03.090309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.845 [2024-07-25 12:17:03.090329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.845 qpair failed and we were unable to recover it. 
00:30:25.845 [2024-07-25 12:17:03.100185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.845 [2024-07-25 12:17:03.100307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.845 [2024-07-25 12:17:03.100328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.845 [2024-07-25 12:17:03.100338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.845 [2024-07-25 12:17:03.100347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12b1da0 00:30:25.845 [2024-07-25 12:17:03.100367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.845 qpair failed and we were unable to recover it. 00:30:25.845 [2024-07-25 12:17:03.100395] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:25.845 A controller has encountered a failure and is being reset. 00:30:26.105 Controller properly reset. 00:30:27.480 Initializing NVMe Controllers 00:30:27.480 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:27.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:27.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:27.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:27.480 Initialization complete. Launching workers. 
00:30:27.480 Starting thread on core 1 00:30:27.480 Starting thread on core 2 00:30:27.480 Starting thread on core 3 00:30:27.480 Starting thread on core 0 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:27.480 00:30:27.480 real 0m11.376s 00:30:27.480 user 0m25.391s 00:30:27.480 sys 0m4.320s 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.480 ************************************ 00:30:27.480 END TEST nvmf_target_disconnect_tc2 00:30:27.480 ************************************ 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.480 rmmod nvme_tcp 00:30:27.480 rmmod nvme_fabrics 00:30:27.480 rmmod nvme_keyring 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 120416 ']' 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 120416 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 120416 ']' 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 120416 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120416 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120416' 00:30:27.480 killing process with pid 120416 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 120416 00:30:27.480 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 120416 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.738 12:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.267 12:17:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:30.267 00:30:30.267 real 0m20.156s 00:30:30.267 user 0m51.964s 00:30:30.267 sys 0m9.328s 00:30:30.267 12:17:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.267 12:17:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:30.267 ************************************ 00:30:30.267 END TEST nvmf_target_disconnect 00:30:30.267 ************************************ 00:30:30.267 12:17:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:30.267 00:30:30.267 real 6m20.643s 00:30:30.267 user 12m15.372s 00:30:30.267 sys 1m56.640s 00:30:30.267 12:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.267 12:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.267 ************************************ 00:30:30.267 END TEST nvmf_host 00:30:30.267 ************************************ 00:30:30.267 00:30:30.267 real 23m32.476s 00:30:30.267 user 52m4.312s 00:30:30.267 sys 6m48.758s 00:30:30.267 12:17:07 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.267 12:17:07 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:30:30.267 ************************************ 00:30:30.268 END TEST nvmf_tcp 00:30:30.268 ************************************ 00:30:30.268 12:17:07 -- spdk/autotest.sh@294 -- # [[ 0 -eq 0 ]] 00:30:30.268 12:17:07 -- spdk/autotest.sh@295 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:30.268 12:17:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:30.268 12:17:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:30.268 12:17:07 -- common/autotest_common.sh@10 -- # set +x 00:30:30.268 ************************************ 00:30:30.268 START TEST spdkcli_nvmf_tcp 00:30:30.268 ************************************ 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:30.268 * Looking for test storage... 00:30:30.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=122132 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 122132 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 122132 ']' 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:30.268 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.268 [2024-07-25 12:17:07.388384] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:30:30.268 [2024-07-25 12:17:07.388441] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122132 ] 00:30:30.268 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.268 [2024-07-25 12:17:07.469735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:30.268 [2024-07-25 12:17:07.559922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.268 [2024-07-25 12:17:07.559928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.526 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:30.526 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:30:30.526 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:30.526 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:30:30.527 12:17:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:30.527 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:30.527 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:30.527 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:30.527 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:30.527 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:30.527 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:30.527 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:30.527 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:30.527 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:30.527 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:30.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:30.527 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:30.527 ' 00:30:33.813 [2024-07-25 12:17:10.432132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.749 [2024-07-25 12:17:11.752945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:37.287 [2024-07-25 12:17:14.217003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:39.192 [2024-07-25 12:17:16.360197] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:30:41.095 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:41.095 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:41.095 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:41.095 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:41.095 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:41.095 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:41.095 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:41.095 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:41.095 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:41.095 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:41.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:41.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:41.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:41.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:41.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:41.096 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.096 12:17:18 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:41.096 12:17:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:41.354 12:17:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:41.354 12:17:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:41.354 12:17:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:41.354 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.354 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.354 12:17:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:41.355 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.355 12:17:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.613 12:17:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:41.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:41.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:41.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:41.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:41.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:41.613 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:41.613 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:41.613 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:41.613 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:41.613 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:41.613 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:41.613 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:41.613 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:41.613 ' 00:30:46.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:46.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:46.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:46.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:46.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:46.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:46.890 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:46.890 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:46.890 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:30:46.890 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:46.890 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:46.890 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:46.890 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:46.890 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 122132 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 122132 ']' 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 122132 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122132 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122132' 00:30:47.149 killing process with pid 122132 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 122132 00:30:47.149 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 122132 00:30:47.408 12:17:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:47.408 12:17:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:47.408 12:17:24 
spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 122132 ']' 00:30:47.408 12:17:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 122132 00:30:47.408 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 122132 ']' 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 122132 00:30:47.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (122132) - No such process 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 122132 is not found' 00:30:47.409 Process with pid 122132 is not found 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:47.409 00:30:47.409 real 0m17.356s 00:30:47.409 user 0m38.217s 00:30:47.409 sys 0m0.990s 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.409 12:17:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.409 ************************************ 00:30:47.409 END TEST spdkcli_nvmf_tcp 00:30:47.409 ************************************ 00:30:47.409 12:17:24 -- spdk/autotest.sh@296 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:47.409 12:17:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:47.409 12:17:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:47.409 12:17:24 -- common/autotest_common.sh@10 -- # set +x 00:30:47.409 ************************************ 00:30:47.409 START TEST nvmf_identify_passthru 
00:30:47.409 ************************************ 00:30:47.409 12:17:24 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:47.668 * Looking for test storage... 00:30:47.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.668 12:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.668 12:17:24 
nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.668 12:17:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.668 12:17:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.668 12:17:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.668 12:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.668 12:17:24 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.668 12:17:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.668 12:17:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:30:47.668 12:17:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.668 12:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.668 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.668 12:17:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.669 12:17:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.669 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.669 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.669 12:17:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.669 12:17:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:54.233 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:54.233 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:54.233 12:17:30 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.233 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:54.234 Found net devices under 0000:af:00.0: cvl_0_0 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.234 12:17:30 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:54.234 Found net devices under 0000:af:00.1: cvl_0_1 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.234 12:17:30 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:54.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:30:54.234 00:30:54.234 --- 10.0.0.2 ping statistics --- 00:30:54.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.234 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:30:54.234 00:30:54.234 --- 10.0.0.1 ping statistics --- 00:30:54.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.234 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:54.234 12:17:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:54.234 12:17:30 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:30:54.234 12:17:30 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:86:00.0 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:54.234 12:17:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:54.234 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.419 12:17:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:30:58.419 12:17:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:58.419 12:17:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:30:58.419 12:17:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:58.419 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=129842 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 129842 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 129842 ']' 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:02.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 [2024-07-25 12:17:39.307805] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:31:02.672 [2024-07-25 12:17:39.307865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.672 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.672 [2024-07-25 12:17:39.388449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.672 [2024-07-25 12:17:39.480330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.672 [2024-07-25 12:17:39.480373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.672 [2024-07-25 12:17:39.480383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.672 [2024-07-25 12:17:39.480391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.672 [2024-07-25 12:17:39.480398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
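As an aside for readers of this log: the identify steps above pull the serial and model fields out of `spdk_nvme_identify` output with a `grep | awk` pipeline. A minimal offline sketch of that extraction follows; the sample identify text is assumed, not taken from a live device.

```shell
# Hedged sketch of the field extraction seen in the trace above:
# grep selects the "Serial Number:" line, awk prints the third
# whitespace-separated field (the serial itself). Sample input only.
identify_output='Serial Number:                         BTLJ916308MR1P0FGN'
serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
echo "$serial"
```

The same pattern is reused later for `Model Number:`, where field 3 is the first word of the model string.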
00:31:02.672 [2024-07-25 12:17:39.483627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.672 [2024-07-25 12:17:39.483664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.672 [2024-07-25 12:17:39.483764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.672 [2024-07-25 12:17:39.483766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 INFO: Log level set to 20 00:31:02.672 INFO: Requests: 00:31:02.672 { 00:31:02.672 "jsonrpc": "2.0", 00:31:02.672 "method": "nvmf_set_config", 00:31:02.672 "id": 1, 00:31:02.672 "params": { 00:31:02.672 "admin_cmd_passthru": { 00:31:02.672 "identify_ctrlr": true 00:31:02.672 } 00:31:02.672 } 00:31:02.672 } 00:31:02.672 00:31:02.672 INFO: response: 00:31:02.672 { 00:31:02.672 "jsonrpc": "2.0", 00:31:02.672 "id": 1, 00:31:02.672 "result": true 00:31:02.672 } 00:31:02.672 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 INFO: Setting log level to 20 00:31:02.672 INFO: Setting log level to 20 00:31:02.672 INFO: Log level set to 20 00:31:02.672 INFO: Log level set to 20 00:31:02.672 
INFO: Requests: 00:31:02.672 { 00:31:02.672 "jsonrpc": "2.0", 00:31:02.672 "method": "framework_start_init", 00:31:02.672 "id": 1 00:31:02.672 } 00:31:02.672 00:31:02.672 INFO: Requests: 00:31:02.672 { 00:31:02.672 "jsonrpc": "2.0", 00:31:02.672 "method": "framework_start_init", 00:31:02.672 "id": 1 00:31:02.672 } 00:31:02.672 00:31:02.672 [2024-07-25 12:17:39.645138] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:02.672 INFO: response: 00:31:02.672 { 00:31:02.672 "jsonrpc": "2.0", 00:31:02.672 "id": 1, 00:31:02.672 "result": true 00:31:02.672 } 00:31:02.672 00:31:02.672 INFO: response: 00:31:02.672 { 00:31:02.672 "jsonrpc": "2.0", 00:31:02.672 "id": 1, 00:31:02.672 "result": true 00:31:02.672 } 00:31:02.672 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 INFO: Setting log level to 40 00:31:02.672 INFO: Setting log level to 40 00:31:02.672 INFO: Setting log level to 40 00:31:02.672 [2024-07-25 12:17:39.658893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.672 12:17:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:31:02.672 12:17:39 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.672 12:17:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.958 Nvme0n1 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.958 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.958 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.958 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.958 [2024-07-25 12:17:42.593234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.958 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:05.958 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.958 12:17:42 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.958 [ 00:31:05.958 { 00:31:05.958 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:05.958 "subtype": "Discovery", 00:31:05.958 "listen_addresses": [], 00:31:05.958 "allow_any_host": true, 00:31:05.958 "hosts": [] 00:31:05.958 }, 00:31:05.958 { 00:31:05.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.958 "subtype": "NVMe", 00:31:05.958 "listen_addresses": [ 00:31:05.958 { 00:31:05.958 "trtype": "TCP", 00:31:05.958 "adrfam": "IPv4", 00:31:05.958 "traddr": "10.0.0.2", 00:31:05.958 "trsvcid": "4420" 00:31:05.958 } 00:31:05.958 ], 00:31:05.958 "allow_any_host": true, 00:31:05.958 "hosts": [], 00:31:05.958 "serial_number": "SPDK00000000000001", 00:31:05.958 "model_number": "SPDK bdev Controller", 00:31:05.958 "max_namespaces": 1, 00:31:05.958 "min_cntlid": 1, 00:31:05.958 "max_cntlid": 65519, 00:31:05.958 "namespaces": [ 00:31:05.958 { 00:31:05.958 "nsid": 1, 00:31:05.959 "bdev_name": "Nvme0n1", 00:31:05.959 "name": "Nvme0n1", 00:31:05.959 "nguid": "E2D417B576914F3E80A972616A0ABA6E", 00:31:05.959 "uuid": "e2d417b5-7691-4f3e-80a9-72616a0aba6e" 00:31:05.959 } 00:31:05.959 ] 00:31:05.959 } 00:31:05.959 ] 00:31:05.959 12:17:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.959 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:05.959 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:05.959 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:05.959 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.959 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:31:05.959 12:17:42 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:05.959 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:05.959 12:17:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:05.959 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.959 12:17:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:05.959 12:17:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.959 12:17:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:05.959 12:17:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.959 rmmod 
nvme_tcp 00:31:05.959 rmmod nvme_fabrics 00:31:05.959 rmmod nvme_keyring 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 129842 ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 129842 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 129842 ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 129842 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129842 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129842' 00:31:05.959 killing process with pid 129842 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 129842 00:31:05.959 12:17:43 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 129842 00:31:07.861 12:17:44 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:07.861 12:17:44 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:07.862 12:17:44 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:07.862 12:17:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.862 
12:17:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:07.862 12:17:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.862 12:17:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:07.862 12:17:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.766 12:17:46 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:09.766 00:31:09.766 real 0m22.238s 00:31:09.766 user 0m28.696s 00:31:09.766 sys 0m5.333s 00:31:09.766 12:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.766 12:17:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:09.766 ************************************ 00:31:09.766 END TEST nvmf_identify_passthru 00:31:09.766 ************************************ 00:31:09.766 12:17:46 -- spdk/autotest.sh@298 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:09.766 12:17:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:09.766 12:17:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.766 12:17:46 -- common/autotest_common.sh@10 -- # set +x 00:31:09.766 ************************************ 00:31:09.766 START TEST nvmf_dif 00:31:09.766 ************************************ 00:31:09.766 12:17:46 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:09.766 * Looking for test storage... 
00:31:09.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.766 12:17:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.766 12:17:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.767 12:17:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.767 12:17:47 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.767 12:17:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.767 12:17:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.767 12:17:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.767 12:17:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.767 12:17:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:09.767 12:17:47 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:09.767 12:17:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:09.767 12:17:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:09.767 12:17:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:09.767 12:17:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:09.767 12:17:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.767 12:17:47 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:09.767 12:17:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.767 12:17:47 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.767 12:17:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:16.331 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 
(0x8086 - 0x159b)' 00:31:16.331 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:16.331 Found net devices under 0000:af:00.0: cvl_0_0 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:16.331 Found net devices under 0000:af:00.1: cvl_0_1 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.331 12:17:52 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.331 12:17:52 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:31:16.332 00:31:16.332 --- 10.0.0.2 ping statistics --- 00:31:16.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.332 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:31:16.332 00:31:16.332 --- 10.0.0.1 ping statistics --- 00:31:16.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.332 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:16.332 12:17:52 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:18.236 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:18.236 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:18.236 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.495 12:17:55 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:18.495 12:17:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:18.495 12:17:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=135444 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:18.495 12:17:55 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 135444 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 135444 ']' 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:18.495 12:17:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.495 [2024-07-25 12:17:55.753720] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:31:18.495 [2024-07-25 12:17:55.753777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.495 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.754 [2024-07-25 12:17:55.840955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.754 [2024-07-25 12:17:55.929888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.754 [2024-07-25 12:17:55.929932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.755 [2024-07-25 12:17:55.929942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.755 [2024-07-25 12:17:55.929951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.755 [2024-07-25 12:17:55.929959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
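The `nvmf_tcp_init` sequence traced earlier (nvmf/common.sh@229-268) moves the target-side port of the NIC pair into a private network namespace so the target and initiator can exercise real hardware on a single host. Below is a condensed sketch of that sequence; the interface names and 10.0.0.0/24 addresses mirror this run, while the `run` helper and `DRY_RUN` knob are illustrative additions (DRY_RUN defaults to on, so the script only prints the commands instead of needing root and a real E810 pair):

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced in this log: isolate the target port
# (cvl_0_0) in a netns, address both ends, open TCP/4420, verify with ping.
# DRY_RUN=1 (the default here) prints each command instead of executing it.
set -u

TGT_IF=${TGT_IF:-cvl_0_0}            # target-side port, moved into the netns
INI_IF=${INI_IF:-cvl_0_1}            # initiator-side port, stays in root ns
NS=${NS:-cvl_0_0_ns_spdk}
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

The command order matches the trace above; the two pings are the same connectivity check whose 0.167 ms / 0.261 ms round trips appear in the log before `nvmf_tgt` is launched inside the namespace with `ip netns exec`.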
00:31:18.755 [2024-07-25 12:17:55.929980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:31:19.692 12:17:56 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 12:17:56 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.692 12:17:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:19.692 12:17:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 [2024-07-25 12:17:56.731415] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.692 12:17:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 ************************************ 00:31:19.692 START TEST fio_dif_1_default 00:31:19.692 ************************************ 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 bdev_null0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.692 [2024-07-25 12:17:56.803740] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.692 { 00:31:19.692 "params": { 00:31:19.692 "name": "Nvme$subsystem", 00:31:19.692 "trtype": "$TEST_TRANSPORT", 00:31:19.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.692 "adrfam": "ipv4", 00:31:19.692 "trsvcid": "$NVMF_PORT", 00:31:19.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.692 "hdgst": ${hdgst:-false}, 00:31:19.692 "ddgst": ${ddgst:-false} 00:31:19.692 }, 00:31:19.692 "method": "bdev_nvme_attach_controller" 00:31:19.692 } 00:31:19.692 EOF 00:31:19.692 )") 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
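The `ldd`/`grep`/`awk` lines being expanded here (common.sh@1344-1352) are the harness probing whether the fio bdev plugin was linked against a sanitizer runtime, which would then need to appear in `LD_PRELOAD` ahead of the plugin itself. A minimal sketch of one iteration of that probe, assuming the workspace path from this log; in this run no sanitizer is linked, so `asan_lib` is empty and `LD_PRELOAD` ends up as the plugin alone with a leading space, exactly as the trace shows:

```shell
# Probe the fio bdev plugin for a linked ASan runtime, mirroring one
# iteration of the sanitizer loop in the trace (the real loop also checks
# libclang_rt.asan). An empty result yields LD_PRELOAD=" <plugin>".
SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
plugin=$SPDK/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" 2>/dev/null | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin"   # leading space when asan_lib is empty
echo "LD_PRELOAD='$LD_PRELOAD'"
```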
00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:19.692 "params": { 00:31:19.692 "name": "Nvme0", 00:31:19.692 "trtype": "tcp", 00:31:19.692 "traddr": "10.0.0.2", 00:31:19.692 "adrfam": "ipv4", 00:31:19.692 "trsvcid": "4420", 00:31:19.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.692 "hdgst": false, 00:31:19.692 "ddgst": false 00:31:19.692 }, 00:31:19.692 "method": "bdev_nvme_attach_controller" 00:31:19.692 }' 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:19.692 12:17:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.951 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:19.951 fio-3.35 
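The JSON handed to fio over the `--spdk_json_conf` fd is assembled by `gen_nvmf_target_json`, whose expansion appears just above: one `bdev_nvme_attach_controller` entry per subsystem id, joined and pretty-printed through `jq`. A standalone sketch of the per-subsystem entry follows; `gen_attach_entry` is a hypothetical helper name, and the joining/`jq` step is elided:

```shell
# Emit one bdev_nvme_attach_controller config entry with the same fields
# the trace above prints. $1 is the subsystem id (0, 1, ...).
gen_attach_entry() {
    local sub=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_entry 0
```

With id 0 this reproduces the `Nvme0`/`cnode0` block printed in the log; the multi-subsystem test later in this run emits the same shape twice, for `cnode0` and `cnode1`.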
00:31:19.951 Starting 1 thread 00:31:20.210 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.486 00:31:32.486 filename0: (groupid=0, jobs=1): err= 0: pid=136098: Thu Jul 25 12:18:07 2024 00:31:32.486 read: IOPS=188, BW=756KiB/s (774kB/s)(7568KiB/10014msec) 00:31:32.486 slat (nsec): min=5748, max=90502, avg=12222.70, stdev=2063.02 00:31:32.486 clat (usec): min=716, max=47896, avg=21137.56, stdev=20250.85 00:31:32.486 lat (usec): min=728, max=47919, avg=21149.78, stdev=20250.70 00:31:32.486 clat percentiles (usec): 00:31:32.486 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 824], 20.00th=[ 840], 00:31:32.486 | 30.00th=[ 848], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:31:32.486 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:31:32.486 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:31:32.486 | 99.99th=[47973] 00:31:32.486 bw ( KiB/s): min= 704, max= 768, per=99.90%, avg=755.20, stdev=26.27, samples=20 00:31:32.486 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:31:32.486 lat (usec) : 750=0.16%, 1000=49.42% 00:31:32.486 lat (msec) : 2=0.32%, 50=50.11% 00:31:32.486 cpu : usr=94.53%, sys=5.11%, ctx=18, majf=0, minf=240 00:31:32.486 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.486 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.486 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:32.486 00:31:32.486 Run status group 0 (all jobs): 00:31:32.486 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7568KiB (7750kB), run=10014-10014msec 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:32.486 12:18:08 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.486 00:31:32.486 real 0m11.268s 00:31:32.486 user 0m20.558s 00:31:32.486 sys 0m0.898s 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 ************************************ 00:31:32.486 END TEST fio_dif_1_default 00:31:32.486 ************************************ 00:31:32.486 12:18:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:32.486 12:18:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:32.486 12:18:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 ************************************ 00:31:32.486 START TEST fio_dif_1_multi_subsystems 00:31:32.486 
************************************ 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 bdev_null0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.486 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.487 [2024-07-25 12:18:08.143986] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.487 bdev_null1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.487 
12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.487 12:18:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.487 { 00:31:32.487 "params": { 00:31:32.487 "name": "Nvme$subsystem", 00:31:32.487 "trtype": "$TEST_TRANSPORT", 00:31:32.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.487 "adrfam": "ipv4", 00:31:32.487 "trsvcid": "$NVMF_PORT", 00:31:32.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.487 "hdgst": ${hdgst:-false}, 00:31:32.487 "ddgst": ${ddgst:-false} 00:31:32.487 }, 00:31:32.487 "method": "bdev_nvme_attach_controller" 00:31:32.487 } 00:31:32.487 EOF 00:31:32.487 )") 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.487 { 00:31:32.487 "params": { 00:31:32.487 "name": "Nvme$subsystem", 00:31:32.487 "trtype": "$TEST_TRANSPORT", 00:31:32.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.487 "adrfam": "ipv4", 00:31:32.487 "trsvcid": "$NVMF_PORT", 00:31:32.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.487 "hdgst": ${hdgst:-false}, 00:31:32.487 "ddgst": ${ddgst:-false} 00:31:32.487 }, 00:31:32.487 "method": "bdev_nvme_attach_controller" 00:31:32.487 } 00:31:32.487 EOF 00:31:32.487 )") 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:32.487 "params": { 00:31:32.487 "name": "Nvme0", 00:31:32.487 "trtype": "tcp", 00:31:32.487 "traddr": "10.0.0.2", 00:31:32.487 "adrfam": "ipv4", 00:31:32.487 "trsvcid": "4420", 00:31:32.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:32.487 "hdgst": false, 00:31:32.487 "ddgst": false 00:31:32.487 }, 00:31:32.487 "method": "bdev_nvme_attach_controller" 00:31:32.487 },{ 00:31:32.487 "params": { 00:31:32.487 "name": "Nvme1", 00:31:32.487 "trtype": "tcp", 00:31:32.487 "traddr": "10.0.0.2", 00:31:32.487 "adrfam": "ipv4", 00:31:32.487 "trsvcid": "4420", 00:31:32.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:32.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:32.487 "hdgst": false, 00:31:32.487 "ddgst": false 00:31:32.487 }, 00:31:32.487 "method": "bdev_nvme_attach_controller" 00:31:32.487 }' 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:32.487 12:18:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.487 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:32.487 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:32.487 fio-3.35 00:31:32.487 Starting 2 threads 00:31:32.487 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.462 00:31:42.462 filename0: (groupid=0, jobs=1): err= 0: pid=138531: Thu Jul 25 12:18:19 2024 00:31:42.462 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10031msec) 00:31:42.462 slat (nsec): min=10091, max=42269, avg=22539.27, stdev=3592.26 00:31:42.462 clat (usec): min=40864, max=42994, avg=41906.25, stdev=256.00 00:31:42.462 lat (usec): min=40885, max=43023, avg=41928.79, stdev=256.23 00:31:42.462 clat percentiles (usec): 00:31:42.462 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:31:42.462 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:31:42.462 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.462 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:42.462 | 99.99th=[43254] 00:31:42.462 bw ( KiB/s): min= 352, max= 384, per=49.63%, avg=380.80, stdev= 9.85, samples=20 00:31:42.462 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:42.462 lat (msec) : 50=100.00% 00:31:42.462 cpu : usr=97.33%, sys=2.22%, ctx=12, majf=0, minf=102 00:31:42.462 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:42.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.462 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.462 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.462 filename1: (groupid=0, jobs=1): err= 0: pid=138532: Thu Jul 25 12:18:19 2024 00:31:42.462 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10028msec) 00:31:42.462 slat (nsec): min=8214, max=38620, avg=11773.09, stdev=4049.85 00:31:42.462 clat (usec): min=40825, max=43038, avg=41573.82, stdev=503.33 00:31:42.462 lat (usec): min=40834, max=43055, avg=41585.59, stdev=503.60 00:31:42.462 clat percentiles (usec): 00:31:42.462 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:42.462 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:31:42.462 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.462 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:42.462 | 99.99th=[43254] 00:31:42.462 bw ( KiB/s): min= 384, max= 384, per=50.16%, avg=384.00, stdev= 0.00, samples=20 00:31:42.462 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:31:42.462 lat (msec) : 50=100.00% 00:31:42.462 cpu : usr=97.40%, sys=2.31%, ctx=8, majf=0, minf=162 00:31:42.462 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.462 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.462 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.462 00:31:42.462 Run status group 0 (all jobs): 00:31:42.462 READ: bw=766KiB/s (784kB/s), 381KiB/s-385KiB/s (390kB/s-394kB/s), io=7680KiB (7864kB), run=10028-10031msec 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:42.721 
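The multi-subsystem run above is driven by a JSON config (printed earlier by nvmf/common.sh) containing one `bdev_nvme_attach_controller` entry per subsystem, joined with commas. A minimal self-contained sketch of that assembly, with hard-coded stand-in values (the real harness derives them via `gen_nvmf_target_json`):

```shell
#!/bin/sh
# Hypothetical, simplified re-creation of the per-subsystem config assembly.
# The real script builds a bash array and joins it with IFS=,; here plain
# string concatenation is used so the sketch stays POSIX sh.
gen_config() {
    config=""
    for sub in 0 1; do
        entry=$(cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
        # Comma-join successive subsystem entries, as the IFS=, printf does above.
        if [ -z "$config" ]; then config="$entry"; else config="$config,$entry"; fi
    done
    printf '%s\n' "$config"
}
gen_config
```

fio receives this text on `/dev/fd/62` via `--spdk_json_conf`, so each `filenameN` in the job maps to one attached controller.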
12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.721 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.721 00:31:42.721 real 0m11.740s 00:31:42.721 user 0m31.991s 00:31:42.721 sys 0m0.815s 00:31:42.722 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:42.722 12:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.722 ************************************ 00:31:42.722 END TEST fio_dif_1_multi_subsystems 00:31:42.722 ************************************ 00:31:42.722 12:18:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:42.722 12:18:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:42.722 12:18:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:42.722 12:18:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.722 ************************************ 00:31:42.722 START TEST fio_dif_rand_params 00:31:42.722 ************************************ 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.722 bdev_null0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.722 12:18:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.722 [2024-07-25 12:18:19.960344] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.722 { 00:31:42.722 "params": { 00:31:42.722 "name": "Nvme$subsystem", 00:31:42.722 "trtype": "$TEST_TRANSPORT", 00:31:42.722 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:42.722 "adrfam": "ipv4", 00:31:42.722 "trsvcid": "$NVMF_PORT", 00:31:42.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.722 "hdgst": ${hdgst:-false}, 00:31:42.722 "ddgst": ${ddgst:-false} 00:31:42.722 }, 00:31:42.722 "method": "bdev_nvme_attach_controller" 00:31:42.722 } 00:31:42.722 EOF 00:31:42.722 )") 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:42.722 12:18:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:42.722 12:18:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:42.722 "params": { 00:31:42.722 "name": "Nvme0", 00:31:42.722 "trtype": "tcp", 00:31:42.722 "traddr": "10.0.0.2", 00:31:42.722 "adrfam": "ipv4", 00:31:42.722 "trsvcid": "4420", 00:31:42.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.722 "hdgst": false, 00:31:42.722 "ddgst": false 00:31:42.722 }, 00:31:42.722 "method": "bdev_nvme_attach_controller" 00:31:42.722 }' 00:31:42.722 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.722 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.722 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.722 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.722 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:42.722 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.010 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.010 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.010 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:43.010 12:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.278 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:43.278 ... 00:31:43.278 fio-3.35 00:31:43.278 Starting 3 threads 00:31:43.278 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.846 00:31:49.846 filename0: (groupid=0, jobs=1): err= 0: pid=140844: Thu Jul 25 12:18:26 2024 00:31:49.846 read: IOPS=176, BW=22.1MiB/s (23.2MB/s)(111MiB/5042msec) 00:31:49.846 slat (nsec): min=9391, max=42554, avg=16155.35, stdev=5825.64 00:31:49.846 clat (usec): min=6140, max=60144, avg=16953.15, stdev=13795.49 00:31:49.846 lat (usec): min=6153, max=60160, avg=16969.30, stdev=13795.29 00:31:49.846 clat percentiles (usec): 00:31:49.846 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 7832], 20.00th=[ 9765], 00:31:49.846 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12780], 60.00th=[13829], 00:31:49.846 | 70.00th=[14746], 80.00th=[16057], 90.00th=[51119], 95.00th=[53740], 00:31:49.846 | 99.00th=[56886], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:31:49.846 | 99.99th=[60031] 00:31:49.846 bw ( KiB/s): min=15360, max=30720, per=31.12%, avg=22728.20, stdev=5104.02, samples=10 00:31:49.846 iops : min= 120, max= 240, avg=177.50, stdev=39.88, samples=10 00:31:49.846 lat (msec) : 10=23.23%, 20=64.65%, 50=0.45%, 100=11.67% 00:31:49.846 cpu : usr=96.29%, sys=3.33%, ctx=10, majf=0, minf=75 00:31:49.846 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.846 issued rwts: total=891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.846 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:49.846 filename0: (groupid=0, jobs=1): err= 0: pid=140845: Thu Jul 25 12:18:26 2024 00:31:49.846 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(121MiB/5006msec) 00:31:49.846 slat 
(nsec): min=9321, max=41403, avg=14902.42, stdev=6094.93 00:31:49.846 clat (usec): min=5830, max=59016, avg=15441.38, stdev=12254.88 00:31:49.846 lat (usec): min=5841, max=59030, avg=15456.29, stdev=12255.04 00:31:49.846 clat percentiles (usec): 00:31:49.846 | 1.00th=[ 6063], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 9372], 00:31:49.846 | 30.00th=[10159], 40.00th=[10945], 50.00th=[12256], 60.00th=[13566], 00:31:49.846 | 70.00th=[14746], 80.00th=[15795], 90.00th=[18744], 95.00th=[52691], 00:31:49.846 | 99.00th=[57410], 99.50th=[57934], 99.90th=[58983], 99.95th=[58983], 00:31:49.846 | 99.99th=[58983] 00:31:49.846 bw ( KiB/s): min=17664, max=34560, per=33.96%, avg=24806.40, stdev=5628.70, samples=10 00:31:49.846 iops : min= 138, max= 270, avg=193.80, stdev=43.97, samples=10 00:31:49.846 lat (msec) : 10=27.70%, 20=63.34%, 50=0.72%, 100=8.24% 00:31:49.846 cpu : usr=96.40%, sys=3.22%, ctx=9, majf=0, minf=128 00:31:49.846 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.846 issued rwts: total=971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.846 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:49.846 filename0: (groupid=0, jobs=1): err= 0: pid=140846: Thu Jul 25 12:18:26 2024 00:31:49.846 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(127MiB/5047msec) 00:31:49.846 slat (nsec): min=9372, max=46241, avg=15825.66, stdev=6454.83 00:31:49.846 clat (usec): min=5490, max=92178, avg=14770.56, stdev=12680.91 00:31:49.846 lat (usec): min=5501, max=92194, avg=14786.39, stdev=12681.35 00:31:49.846 clat percentiles (usec): 00:31:49.846 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 7832], 20.00th=[ 8848], 00:31:49.846 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[11338], 60.00th=[11994], 00:31:49.846 | 70.00th=[12649], 80.00th=[13435], 90.00th=[46924], 95.00th=[51643], 
00:31:49.846 | 99.00th=[54264], 99.50th=[54264], 99.90th=[56361], 99.95th=[91751], 00:31:49.846 | 99.99th=[91751] 00:31:49.846 bw ( KiB/s): min=19968, max=36937, per=35.58%, avg=25985.40, stdev=5588.29, samples=10 00:31:49.846 iops : min= 156, max= 288, avg=202.90, stdev=43.50, samples=10 00:31:49.846 lat (msec) : 10=39.00%, 20=50.98%, 50=2.46%, 100=7.56% 00:31:49.846 cpu : usr=96.23%, sys=3.41%, ctx=12, majf=0, minf=134 00:31:49.846 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.846 issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.846 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:49.846 00:31:49.846 Run status group 0 (all jobs): 00:31:49.846 READ: bw=71.3MiB/s (74.8MB/s), 22.1MiB/s-25.2MiB/s (23.2MB/s-26.4MB/s), io=360MiB (377MB), run=5006-5047msec 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.846 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.846 bdev_null0 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:49.847 12:18:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 [2024-07-25 12:18:26.291206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 bdev_null1 00:31:49.847 12:18:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 bdev_null2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 
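The `create_subsystems 0 1 2` trace above repeats a fixed four-step pattern per subsystem: null bdev, subsystem, namespace, TCP listener. A sketch with a stubbed `rpc_cmd` (the real one wraps SPDK's `scripts/rpc.py` against a running `nvmf_tgt`; the stub only records the calls so the sequence is visible):

```shell
#!/bin/sh
# Stub standing in for the rpc.py wrapper; it echoes instead of talking to a
# live SPDK target, so this sketch runs anywhere.
rpc_cmd() {
    echo "rpc: $*"
}

# Per-subsystem setup as traced in target/dif.sh. --dif-type 2 mirrors the
# NULL_DIF=2 value set for this test case.
create_subsystem() {
    sub_id=$1
    rpc_cmd bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420
}

for sub in 0 1 2; do
    create_subsystem "$sub"
done
```

Teardown later in the log is the mirror image: `nvmf_delete_subsystem` followed by `bdev_null_delete` for each subsystem id.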
00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:49.847 { 00:31:49.847 "params": { 00:31:49.847 "name": "Nvme$subsystem", 00:31:49.847 "trtype": "$TEST_TRANSPORT", 00:31:49.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.847 "adrfam": "ipv4", 00:31:49.847 "trsvcid": "$NVMF_PORT", 00:31:49.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.847 "hdgst": ${hdgst:-false}, 00:31:49.847 "ddgst": ${ddgst:-false} 00:31:49.847 }, 00:31:49.847 "method": "bdev_nvme_attach_controller" 00:31:49.847 } 00:31:49.847 EOF 00:31:49.847 )") 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:49.847 { 00:31:49.847 "params": { 00:31:49.847 "name": "Nvme$subsystem", 00:31:49.847 "trtype": "$TEST_TRANSPORT", 00:31:49.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.847 "adrfam": "ipv4", 00:31:49.847 "trsvcid": "$NVMF_PORT", 00:31:49.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.847 "hdgst": ${hdgst:-false}, 00:31:49.847 "ddgst": ${ddgst:-false} 00:31:49.847 }, 00:31:49.847 "method": "bdev_nvme_attach_controller" 00:31:49.847 } 00:31:49.847 EOF 00:31:49.847 )") 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:49.847 
12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:49.847 { 00:31:49.847 "params": { 00:31:49.847 "name": "Nvme$subsystem", 00:31:49.847 "trtype": "$TEST_TRANSPORT", 00:31:49.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.847 "adrfam": "ipv4", 00:31:49.847 "trsvcid": "$NVMF_PORT", 00:31:49.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.847 "hdgst": ${hdgst:-false}, 00:31:49.847 "ddgst": ${ddgst:-false} 00:31:49.847 }, 00:31:49.847 "method": "bdev_nvme_attach_controller" 00:31:49.847 } 00:31:49.847 EOF 00:31:49.847 )") 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:49.847 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:49.848 "params": { 00:31:49.848 "name": "Nvme0", 00:31:49.848 "trtype": "tcp", 00:31:49.848 "traddr": "10.0.0.2", 00:31:49.848 "adrfam": "ipv4", 00:31:49.848 "trsvcid": "4420", 00:31:49.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:49.848 "hdgst": false, 00:31:49.848 "ddgst": false 00:31:49.848 }, 00:31:49.848 "method": "bdev_nvme_attach_controller" 00:31:49.848 },{ 00:31:49.848 "params": { 00:31:49.848 "name": "Nvme1", 00:31:49.848 "trtype": "tcp", 00:31:49.848 "traddr": "10.0.0.2", 00:31:49.848 "adrfam": "ipv4", 00:31:49.848 "trsvcid": "4420", 00:31:49.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:49.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:49.848 "hdgst": false, 00:31:49.848 "ddgst": false 00:31:49.848 }, 00:31:49.848 "method": "bdev_nvme_attach_controller" 00:31:49.848 },{ 00:31:49.848 "params": { 00:31:49.848 "name": "Nvme2", 00:31:49.848 "trtype": "tcp", 00:31:49.848 "traddr": "10.0.0.2", 00:31:49.848 "adrfam": "ipv4", 00:31:49.848 "trsvcid": "4420", 00:31:49.848 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:49.848 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:49.848 "hdgst": false, 00:31:49.848 "ddgst": false 00:31:49.848 }, 00:31:49.848 "method": "bdev_nvme_attach_controller" 00:31:49.848 }' 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.848 12:18:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:49.848 12:18:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.848 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:49.848 ... 00:31:49.848 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:49.848 ... 00:31:49.848 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:49.848 ... 
00:31:49.848 fio-3.35 00:31:49.848 Starting 24 threads 00:31:49.848 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.052 00:32:02.052 filename0: (groupid=0, jobs=1): err= 0: pid=142033: Thu Jul 25 12:18:37 2024 00:32:02.052 read: IOPS=420, BW=1682KiB/s (1722kB/s)(16.4MiB/10010msec) 00:32:02.052 slat (usec): min=10, max=121, avg=47.33, stdev=22.99 00:32:02.052 clat (usec): min=26439, max=51647, avg=37594.13, stdev=886.04 00:32:02.052 lat (usec): min=26450, max=51666, avg=37641.46, stdev=887.47 00:32:02.052 clat percentiles (usec): 00:32:02.052 | 1.00th=[36963], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:02.053 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.053 | 99.00th=[38536], 99.50th=[39060], 99.90th=[46924], 99.95th=[46924], 00:32:02.053 | 99.99th=[51643] 00:32:02.053 bw ( KiB/s): min= 1660, max= 1792, per=4.17%, avg=1677.21, stdev=40.48, samples=19 00:32:02.053 iops : min= 415, max= 448, avg=419.26, stdev=10.13, samples=19 00:32:02.053 lat (msec) : 50=99.95%, 100=0.05% 00:32:02.053 cpu : usr=98.53%, sys=1.09%, ctx=14, majf=0, minf=34 00:32:02.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142034: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:32:02.053 slat (nsec): min=7227, max=54817, avg=27901.57, stdev=8015.17 00:32:02.053 clat (usec): min=23917, max=65791, avg=37876.25, stdev=1196.38 00:32:02.053 lat (usec): min=23929, max=65806, avg=37904.15, stdev=1195.71 00:32:02.053 
clat percentiles (usec): 00:32:02.053 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.053 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.053 | 99.00th=[39584], 99.50th=[40633], 99.90th=[52167], 99.95th=[52167], 00:32:02.053 | 99.99th=[65799] 00:32:02.053 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1676.40, stdev=56.88, samples=20 00:32:02.053 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.053 lat (msec) : 50=99.57%, 100=0.43% 00:32:02.053 cpu : usr=98.79%, sys=0.81%, ctx=14, majf=0, minf=28 00:32:02.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142035: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=418, BW=1675KiB/s (1715kB/s)(16.4MiB/10030msec) 00:32:02.053 slat (usec): min=4, max=108, avg=42.63, stdev=24.50 00:32:02.053 clat (usec): min=19088, max=75465, avg=37787.50, stdev=3168.86 00:32:02.053 lat (usec): min=19105, max=75481, avg=37830.13, stdev=3167.95 00:32:02.053 clat percentiles (usec): 00:32:02.053 | 1.00th=[31327], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:02.053 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.053 | 99.00th=[50070], 99.50th=[57410], 99.90th=[74974], 99.95th=[74974], 00:32:02.053 | 99.99th=[74974] 00:32:02.053 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1673.20, stdev=54.88, samples=20 00:32:02.053 iops : min= 384, max= 448, avg=418.30, 
stdev=13.72, samples=20 00:32:02.053 lat (msec) : 20=0.10%, 50=98.81%, 100=1.10% 00:32:02.053 cpu : usr=98.86%, sys=0.74%, ctx=13, majf=0, minf=38 00:32:02.053 IO depths : 1=4.8%, 2=10.6%, 4=23.6%, 8=53.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142036: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:32:02.053 slat (nsec): min=10519, max=55097, avg=28131.51, stdev=8001.69 00:32:02.053 clat (usec): min=22707, max=52936, avg=37876.49, stdev=1216.64 00:32:02.053 lat (usec): min=22719, max=52966, avg=37904.62, stdev=1216.14 00:32:02.053 clat percentiles (usec): 00:32:02.053 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.053 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.053 | 99.00th=[39584], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:32:02.053 | 99.99th=[52691] 00:32:02.053 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1676.40, stdev=56.88, samples=20 00:32:02.053 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.053 lat (msec) : 50=99.48%, 100=0.52% 00:32:02.053 cpu : usr=98.80%, sys=0.81%, ctx=15, majf=0, minf=46 00:32:02.053 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142037: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=419, BW=1677KiB/s (1717kB/s)(16.4MiB/10039msec) 00:32:02.053 slat (nsec): min=6185, max=56317, avg=19554.53, stdev=6695.24 00:32:02.053 clat (usec): min=25324, max=61864, avg=38005.14, stdev=1465.82 00:32:02.053 lat (usec): min=25337, max=61881, avg=38024.69, stdev=1465.27 00:32:02.053 clat percentiles (usec): 00:32:02.053 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.053 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:32:02.053 | 99.00th=[39060], 99.50th=[50594], 99.90th=[55313], 99.95th=[55313], 00:32:02.053 | 99.99th=[61604] 00:32:02.053 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1675.90, stdev=57.04, samples=20 00:32:02.053 iops : min= 384, max= 448, avg=418.95, stdev=14.27, samples=20 00:32:02.053 lat (msec) : 50=99.24%, 100=0.76% 00:32:02.053 cpu : usr=98.77%, sys=0.84%, ctx=15, majf=0, minf=49 00:32:02.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142038: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=419, BW=1679KiB/s (1720kB/s)(16.4MiB/10023msec) 00:32:02.053 slat (nsec): min=9355, max=90459, avg=13038.23, stdev=4312.39 00:32:02.053 clat (usec): min=21829, max=53752, avg=37991.89, stdev=1752.95 00:32:02.053 lat (usec): min=21840, max=53764, avg=38004.93, stdev=1753.46 00:32:02.053 clat percentiles (usec): 00:32:02.053 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 
20.00th=[38011], 00:32:02.053 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:32:02.053 | 99.00th=[46924], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:32:02.053 | 99.99th=[53740] 00:32:02.053 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1676.40, stdev=56.88, samples=20 00:32:02.053 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.053 lat (msec) : 50=99.48%, 100=0.52% 00:32:02.053 cpu : usr=98.61%, sys=0.99%, ctx=14, majf=0, minf=71 00:32:02.053 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142039: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=422, BW=1691KiB/s (1731kB/s)(16.6MiB/10032msec) 00:32:02.053 slat (nsec): min=6616, max=82302, avg=30593.81, stdev=16232.20 00:32:02.053 clat (usec): min=13581, max=39248, avg=37617.42, stdev=1869.14 00:32:02.053 lat (usec): min=13589, max=39274, avg=37648.02, stdev=1869.63 00:32:02.053 clat percentiles (usec): 00:32:02.053 | 1.00th=[30016], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.053 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.053 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.053 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:32:02.053 | 99.99th=[39060] 00:32:02.053 bw ( KiB/s): min= 1660, max= 1792, per=4.20%, avg=1689.20, stdev=52.75, samples=20 00:32:02.053 iops : min= 415, max= 448, avg=422.30, stdev=13.19, samples=20 00:32:02.053 lat (msec) : 20=0.38%, 50=99.62% 
00:32:02.053 cpu : usr=99.25%, sys=0.45%, ctx=12, majf=0, minf=57 00:32:02.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.053 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.053 filename0: (groupid=0, jobs=1): err= 0: pid=142040: Thu Jul 25 12:18:37 2024 00:32:02.053 read: IOPS=420, BW=1681KiB/s (1721kB/s)(16.5MiB/10033msec) 00:32:02.054 slat (usec): min=5, max=117, avg=42.21, stdev=23.97 00:32:02.054 clat (usec): min=18683, max=78048, avg=37663.24, stdev=2980.17 00:32:02.054 lat (usec): min=18695, max=78065, avg=37705.44, stdev=2980.50 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[25822], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.054 | 99.00th=[48497], 99.50th=[56361], 99.90th=[65274], 99.95th=[78119], 00:32:02.054 | 99.99th=[78119] 00:32:02.054 bw ( KiB/s): min= 1523, max= 1808, per=4.17%, avg=1679.05, stdev=60.97, samples=20 00:32:02.054 iops : min= 380, max= 452, avg=419.70, stdev=15.36, samples=20 00:32:02.054 lat (msec) : 20=0.09%, 50=98.98%, 100=0.93% 00:32:02.054 cpu : usr=98.64%, sys=0.97%, ctx=14, majf=0, minf=32 00:32:02.054 IO depths : 1=4.8%, 2=10.5%, 4=23.1%, 8=53.6%, 16=8.0%, 32=0.0%, >=64=0.0% 00:32:02.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 issued rwts: total=4216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.054 filename1: (groupid=0, jobs=1): err= 0: 
pid=142041: Thu Jul 25 12:18:37 2024 00:32:02.054 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:32:02.054 slat (nsec): min=10058, max=55264, avg=27054.35, stdev=8164.74 00:32:02.054 clat (usec): min=36597, max=52299, avg=37889.67, stdev=959.12 00:32:02.054 lat (usec): min=36625, max=52317, avg=37916.72, stdev=958.14 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.054 | 99.00th=[39584], 99.50th=[40633], 99.90th=[52167], 99.95th=[52167], 00:32:02.054 | 99.99th=[52167] 00:32:02.054 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1676.40, stdev=56.88, samples=20 00:32:02.054 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.054 lat (msec) : 50=99.62%, 100=0.38% 00:32:02.054 cpu : usr=98.58%, sys=1.03%, ctx=19, majf=0, minf=61 00:32:02.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.054 filename1: (groupid=0, jobs=1): err= 0: pid=142042: Thu Jul 25 12:18:37 2024 00:32:02.054 read: IOPS=419, BW=1677KiB/s (1717kB/s)(16.4MiB/10037msec) 00:32:02.054 slat (nsec): min=6275, max=50608, avg=23954.89, stdev=7492.37 00:32:02.054 clat (usec): min=35988, max=55540, avg=37964.93, stdev=1329.64 00:32:02.054 lat (usec): min=36012, max=55564, avg=37988.89, stdev=1328.77 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 
60.00th=[38011], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:32:02.054 | 99.00th=[39060], 99.50th=[49546], 99.90th=[55313], 99.95th=[55313], 00:32:02.054 | 99.99th=[55313] 00:32:02.054 bw ( KiB/s): min= 1539, max= 1792, per=4.17%, avg=1676.05, stdev=56.65, samples=20 00:32:02.054 iops : min= 384, max= 448, avg=418.95, stdev=14.27, samples=20 00:32:02.054 lat (msec) : 50=99.62%, 100=0.38% 00:32:02.054 cpu : usr=98.89%, sys=0.73%, ctx=13, majf=0, minf=39 00:32:02.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.054 filename1: (groupid=0, jobs=1): err= 0: pid=142043: Thu Jul 25 12:18:37 2024 00:32:02.054 read: IOPS=419, BW=1679KiB/s (1720kB/s)(16.4MiB/10023msec) 00:32:02.054 slat (nsec): min=9597, max=56775, avg=24660.34, stdev=8164.20 00:32:02.054 clat (usec): min=36713, max=52099, avg=37911.57, stdev=939.21 00:32:02.054 lat (usec): min=36751, max=52118, avg=37936.23, stdev=938.35 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:32:02.054 | 99.00th=[39584], 99.50th=[40109], 99.90th=[52167], 99.95th=[52167], 00:32:02.054 | 99.99th=[52167] 00:32:02.054 bw ( KiB/s): min= 1539, max= 1792, per=4.17%, avg=1676.55, stdev=56.49, samples=20 00:32:02.054 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.054 lat (msec) : 50=99.62%, 100=0.38% 00:32:02.054 cpu : usr=98.78%, sys=0.83%, ctx=12, majf=0, minf=50 00:32:02.054 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.054 filename1: (groupid=0, jobs=1): err= 0: pid=142044: Thu Jul 25 12:18:37 2024 00:32:02.054 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10034msec) 00:32:02.054 slat (usec): min=6, max=108, avg=44.15, stdev=23.84 00:32:02.054 clat (usec): min=27046, max=54857, avg=37689.10, stdev=1536.68 00:32:02.054 lat (usec): min=27060, max=54938, avg=37733.25, stdev=1536.67 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.054 | 99.00th=[41157], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:32:02.054 | 99.99th=[54789] 00:32:02.054 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1678.10, stdev=57.00, samples=20 00:32:02.054 iops : min= 384, max= 448, avg=419.50, stdev=14.24, samples=20 00:32:02.054 lat (msec) : 50=99.38%, 100=0.62% 00:32:02.054 cpu : usr=98.58%, sys=1.02%, ctx=20, majf=0, minf=37 00:32:02.054 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:02.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 issued rwts: total=4204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.054 filename1: (groupid=0, jobs=1): err= 0: pid=142045: Thu Jul 25 12:18:37 2024 00:32:02.054 read: IOPS=420, BW=1682KiB/s 
(1722kB/s)(16.4MiB/10010msec) 00:32:02.054 slat (usec): min=8, max=115, avg=49.09, stdev=22.27 00:32:02.054 clat (usec): min=25466, max=51701, avg=37596.12, stdev=932.89 00:32:02.054 lat (usec): min=25477, max=51734, avg=37645.21, stdev=933.34 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[36963], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.054 | 99.00th=[38536], 99.50th=[39060], 99.90th=[46924], 99.95th=[51643], 00:32:02.054 | 99.99th=[51643] 00:32:02.054 bw ( KiB/s): min= 1660, max= 1792, per=4.17%, avg=1677.21, stdev=40.48, samples=19 00:32:02.054 iops : min= 415, max= 448, avg=419.26, stdev=10.13, samples=19 00:32:02.054 lat (msec) : 50=99.90%, 100=0.10% 00:32:02.054 cpu : usr=98.70%, sys=0.92%, ctx=11, majf=0, minf=52 00:32:02.054 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.054 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.054 filename1: (groupid=0, jobs=1): err= 0: pid=142046: Thu Jul 25 12:18:37 2024 00:32:02.054 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:32:02.054 slat (nsec): min=10285, max=57877, avg=27926.01, stdev=8102.26 00:32:02.054 clat (usec): min=36640, max=52459, avg=37863.58, stdev=972.02 00:32:02.054 lat (usec): min=36662, max=52482, avg=37891.51, stdev=971.66 00:32:02.054 clat percentiles (usec): 00:32:02.054 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.054 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.054 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 
95.00th=[38536], 00:32:02.054 | 99.00th=[39584], 99.50th=[40633], 99.90th=[52167], 99.95th=[52167], 00:32:02.054 | 99.99th=[52691] 00:32:02.054 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1676.40, stdev=56.88, samples=20 00:32:02.055 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.055 lat (msec) : 50=99.62%, 100=0.38% 00:32:02.055 cpu : usr=98.69%, sys=0.91%, ctx=22, majf=0, minf=50 00:32:02.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename1: (groupid=0, jobs=1): err= 0: pid=142047: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=422, BW=1691KiB/s (1732kB/s)(16.6MiB/10027msec) 00:32:02.055 slat (nsec): min=8966, max=91530, avg=39423.47, stdev=14391.25 00:32:02.055 clat (usec): min=8877, max=39133, avg=37487.12, stdev=2096.57 00:32:02.055 lat (usec): min=8887, max=39159, avg=37526.54, stdev=2098.41 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[29754], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:02.055 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:32:02.055 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.055 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:32:02.055 | 99.99th=[39060] 00:32:02.055 bw ( KiB/s): min= 1660, max= 1795, per=4.20%, avg=1689.35, stdev=53.06, samples=20 00:32:02.055 iops : min= 415, max= 448, avg=422.30, stdev=13.19, samples=20 00:32:02.055 lat (msec) : 10=0.38%, 50=99.62% 00:32:02.055 cpu : usr=99.07%, sys=0.61%, ctx=12, majf=0, minf=55 00:32:02.055 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename1: (groupid=0, jobs=1): err= 0: pid=142048: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=418, BW=1672KiB/s (1712kB/s)(16.4MiB/10027msec) 00:32:02.055 slat (nsec): min=4575, max=55849, avg=27070.65, stdev=7290.63 00:32:02.055 clat (usec): min=36680, max=74843, avg=38028.45, stdev=2550.65 00:32:02.055 lat (usec): min=36712, max=74874, avg=38055.52, stdev=2549.54 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.055 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.055 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.055 | 99.00th=[39060], 99.50th=[55837], 99.90th=[74974], 99.95th=[74974], 00:32:02.055 | 99.99th=[74974] 00:32:02.055 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1670.00, stdev=65.39, samples=20 00:32:02.055 iops : min= 384, max= 448, avg=417.50, stdev=16.35, samples=20 00:32:02.055 lat (msec) : 50=99.24%, 100=0.76% 00:32:02.055 cpu : usr=98.61%, sys=1.01%, ctx=15, majf=0, minf=43 00:32:02.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename2: (groupid=0, jobs=1): err= 0: pid=142049: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=422, BW=1691KiB/s (1732kB/s)(16.6MiB/10028msec) 00:32:02.055 slat (usec): min=9, max=108, avg=49.95, stdev=21.95 
00:32:02.055 clat (usec): min=9720, max=39148, avg=37379.64, stdev=2057.20 00:32:02.055 lat (usec): min=9731, max=39206, avg=37429.59, stdev=2060.42 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[30016], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:32:02.055 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.055 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.055 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:32:02.055 | 99.99th=[39060] 00:32:02.055 bw ( KiB/s): min= 1660, max= 1792, per=4.20%, avg=1689.20, stdev=52.75, samples=20 00:32:02.055 iops : min= 415, max= 448, avg=422.30, stdev=13.19, samples=20 00:32:02.055 lat (msec) : 10=0.38%, 50=99.62% 00:32:02.055 cpu : usr=98.72%, sys=0.90%, ctx=10, majf=0, minf=47 00:32:02.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename2: (groupid=0, jobs=1): err= 0: pid=142050: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.5MiB/10053msec) 00:32:02.055 slat (nsec): min=9322, max=91016, avg=22761.25, stdev=8670.04 00:32:02.055 clat (usec): min=26366, max=55938, avg=38005.47, stdev=1643.84 00:32:02.055 lat (usec): min=26378, max=55976, avg=38028.23, stdev=1643.09 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.055 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:32:02.055 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:32:02.055 | 99.00th=[46924], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 
00:32:02.055 | 99.99th=[55837] 00:32:02.055 bw ( KiB/s): min= 1600, max= 1776, per=4.17%, avg=1678.95, stdev=44.85, samples=20 00:32:02.055 iops : min= 400, max= 444, avg=419.70, stdev=11.24, samples=20 00:32:02.055 lat (msec) : 50=99.43%, 100=0.57% 00:32:02.055 cpu : usr=98.82%, sys=0.78%, ctx=11, majf=0, minf=56 00:32:02.055 IO depths : 1=0.2%, 2=6.4%, 4=24.9%, 8=56.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename2: (groupid=0, jobs=1): err= 0: pid=142051: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10029msec) 00:32:02.055 slat (usec): min=5, max=105, avg=22.80, stdev=17.85 00:32:02.055 clat (usec): min=19277, max=93235, avg=37652.47, stdev=4834.11 00:32:02.055 lat (usec): min=19343, max=93252, avg=37675.28, stdev=4832.35 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[29230], 5.00th=[30540], 10.00th=[31589], 20.00th=[33162], 00:32:02.055 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:32:02.055 | 70.00th=[38011], 80.00th=[38536], 90.00th=[43779], 95.00th=[44827], 00:32:02.055 | 99.00th=[47449], 99.50th=[55837], 99.90th=[74974], 99.95th=[74974], 00:32:02.055 | 99.99th=[92799] 00:32:02.055 bw ( KiB/s): min= 1456, max= 1776, per=4.22%, avg=1696.40, stdev=62.90, samples=20 00:32:02.055 iops : min= 364, max= 444, avg=424.10, stdev=15.72, samples=20 00:32:02.055 lat (msec) : 20=0.09%, 50=98.92%, 100=0.99% 00:32:02.055 cpu : usr=98.99%, sys=0.60%, ctx=10, majf=0, minf=49 00:32:02.055 IO depths : 1=0.1%, 2=0.3%, 4=2.9%, 8=80.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 
4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename2: (groupid=0, jobs=1): err= 0: pid=142052: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=422, BW=1690KiB/s (1731kB/s)(16.6MiB/10035msec) 00:32:02.055 slat (usec): min=5, max=123, avg=51.61, stdev=21.56 00:32:02.055 clat (usec): min=16924, max=39191, avg=37435.01, stdev=1712.48 00:32:02.055 lat (usec): min=16929, max=39251, avg=37486.61, stdev=1714.39 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[29754], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:02.055 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:02.055 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.055 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:32:02.055 | 99.99th=[39060] 00:32:02.055 bw ( KiB/s): min= 1660, max= 1792, per=4.20%, avg=1689.20, stdev=52.75, samples=20 00:32:02.055 iops : min= 415, max= 448, avg=422.30, stdev=13.19, samples=20 00:32:02.055 lat (msec) : 20=0.38%, 50=99.62% 00:32:02.055 cpu : usr=98.74%, sys=0.84%, ctx=18, majf=0, minf=47 00:32:02.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.055 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.055 filename2: (groupid=0, jobs=1): err= 0: pid=142053: Thu Jul 25 12:18:37 2024 00:32:02.055 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:32:02.055 slat (nsec): min=15648, max=74079, avg=31628.88, stdev=9943.72 00:32:02.055 clat (usec): min=36591, max=52530, avg=37795.33, stdev=978.97 00:32:02.055 lat 
(usec): min=36612, max=52556, avg=37826.96, stdev=979.52 00:32:02.055 clat percentiles (usec): 00:32:02.055 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.055 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:32:02.056 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.056 | 99.00th=[39584], 99.50th=[40633], 99.90th=[52167], 99.95th=[52691], 00:32:02.056 | 99.99th=[52691] 00:32:02.056 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1676.40, stdev=56.88, samples=20 00:32:02.056 iops : min= 384, max= 448, avg=419.10, stdev=14.22, samples=20 00:32:02.056 lat (msec) : 50=99.62%, 100=0.38% 00:32:02.056 cpu : usr=98.49%, sys=1.04%, ctx=14, majf=0, minf=48 00:32:02.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.056 filename2: (groupid=0, jobs=1): err= 0: pid=142054: Thu Jul 25 12:18:37 2024 00:32:02.056 read: IOPS=419, BW=1677KiB/s (1717kB/s)(16.4MiB/10036msec) 00:32:02.056 slat (nsec): min=9893, max=79632, avg=35725.40, stdev=6084.84 00:32:02.056 clat (usec): min=26089, max=55748, avg=37831.89, stdev=1352.86 00:32:02.056 lat (usec): min=26115, max=55791, avg=37867.61, stdev=1352.35 00:32:02.056 clat percentiles (usec): 00:32:02.056 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.056 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:32:02.056 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.056 | 99.00th=[39060], 99.50th=[47973], 99.90th=[55837], 99.95th=[55837], 00:32:02.056 | 99.99th=[55837] 00:32:02.056 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, 
avg=1676.60, stdev=57.30, samples=20 00:32:02.056 iops : min= 384, max= 448, avg=419.15, stdev=14.32, samples=20 00:32:02.056 lat (msec) : 50=99.62%, 100=0.38% 00:32:02.056 cpu : usr=98.33%, sys=1.21%, ctx=12, majf=0, minf=46 00:32:02.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.056 filename2: (groupid=0, jobs=1): err= 0: pid=142055: Thu Jul 25 12:18:37 2024 00:32:02.056 read: IOPS=418, BW=1672KiB/s (1713kB/s)(16.4MiB/10026msec) 00:32:02.056 slat (nsec): min=9816, max=78979, avg=35433.56, stdev=6078.79 00:32:02.056 clat (usec): min=36649, max=74451, avg=37940.38, stdev=2533.64 00:32:02.056 lat (usec): min=36670, max=74478, avg=37975.81, stdev=2532.50 00:32:02.056 clat percentiles (usec): 00:32:02.056 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:02.056 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:32:02.056 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:02.056 | 99.00th=[39060], 99.50th=[55837], 99.90th=[73925], 99.95th=[73925], 00:32:02.056 | 99.99th=[74974] 00:32:02.056 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1670.15, stdev=65.07, samples=20 00:32:02.056 iops : min= 384, max= 448, avg=417.50, stdev=16.35, samples=20 00:32:02.056 lat (msec) : 50=99.24%, 100=0.76% 00:32:02.056 cpu : usr=98.12%, sys=1.42%, ctx=14, majf=0, minf=40 00:32:02.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 issued rwts: 
total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.056 filename2: (groupid=0, jobs=1): err= 0: pid=142056: Thu Jul 25 12:18:37 2024 00:32:02.056 read: IOPS=417, BW=1671KiB/s (1711kB/s)(16.4MiB/10029msec) 00:32:02.056 slat (usec): min=4, max=106, avg=37.90, stdev=22.70 00:32:02.056 clat (usec): min=28035, max=74695, avg=37983.90, stdev=2577.01 00:32:02.056 lat (usec): min=28052, max=74709, avg=38021.80, stdev=2574.32 00:32:02.056 clat percentiles (usec): 00:32:02.056 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:02.056 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:32:02.056 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:32:02.056 | 99.00th=[46400], 99.50th=[54264], 99.90th=[74974], 99.95th=[74974], 00:32:02.056 | 99.99th=[74974] 00:32:02.056 bw ( KiB/s): min= 1520, max= 1776, per=4.15%, avg=1670.00, stdev=49.08, samples=20 00:32:02.056 iops : min= 380, max= 444, avg=417.50, stdev=12.27, samples=20 00:32:02.056 lat (msec) : 50=99.40%, 100=0.60% 00:32:02.056 cpu : usr=98.80%, sys=0.81%, ctx=14, majf=0, minf=57 00:32:02.056 IO depths : 1=0.1%, 2=6.3%, 4=24.9%, 8=56.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.056 issued rwts: total=4190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:02.056 00:32:02.056 Run status group 0 (all jobs): 00:32:02.056 READ: bw=39.3MiB/s (41.2MB/s), 1671KiB/s-1694KiB/s (1711kB/s-1735kB/s), io=395MiB (414MB), run=10010-10053msec 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:02.056 12:18:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 bdev_null0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.056 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.057 [2024-07-25 12:18:38.258042] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.057 bdev_null1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.057 12:18:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:02.057 { 00:32:02.057 "params": { 00:32:02.057 "name": "Nvme$subsystem", 00:32:02.057 "trtype": "$TEST_TRANSPORT", 00:32:02.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:02.057 "adrfam": "ipv4", 00:32:02.057 "trsvcid": "$NVMF_PORT", 00:32:02.057 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:02.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:02.057 "hdgst": ${hdgst:-false}, 00:32:02.057 "ddgst": ${ddgst:-false} 00:32:02.057 }, 00:32:02.057 "method": "bdev_nvme_attach_controller" 00:32:02.057 } 00:32:02.057 EOF 00:32:02.057 )") 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:02.057 { 00:32:02.057 "params": { 00:32:02.057 "name": "Nvme$subsystem", 00:32:02.057 "trtype": "$TEST_TRANSPORT", 00:32:02.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:02.057 "adrfam": "ipv4", 00:32:02.057 "trsvcid": "$NVMF_PORT", 00:32:02.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:02.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:02.057 "hdgst": ${hdgst:-false}, 00:32:02.057 "ddgst": ${ddgst:-false} 00:32:02.057 }, 00:32:02.057 "method": "bdev_nvme_attach_controller" 00:32:02.057 } 00:32:02.057 EOF 00:32:02.057 )") 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:02.057 "params": { 00:32:02.057 "name": "Nvme0", 00:32:02.057 "trtype": "tcp", 00:32:02.057 "traddr": "10.0.0.2", 00:32:02.057 "adrfam": "ipv4", 00:32:02.057 "trsvcid": "4420", 00:32:02.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.057 "hdgst": false, 00:32:02.057 "ddgst": false 00:32:02.057 }, 00:32:02.057 "method": "bdev_nvme_attach_controller" 00:32:02.057 },{ 00:32:02.057 "params": { 00:32:02.057 "name": "Nvme1", 00:32:02.057 "trtype": "tcp", 00:32:02.057 "traddr": "10.0.0.2", 00:32:02.057 "adrfam": "ipv4", 00:32:02.057 "trsvcid": "4420", 00:32:02.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:02.057 "hdgst": false, 00:32:02.057 "ddgst": false 00:32:02.057 }, 00:32:02.057 "method": "bdev_nvme_attach_controller" 00:32:02.057 }' 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:02.057 12:18:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:02.057 12:18:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.057 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:02.057 ... 00:32:02.057 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:02.057 ... 00:32:02.057 fio-3.35 00:32:02.057 Starting 4 threads 00:32:02.057 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.327 00:32:07.327 filename0: (groupid=0, jobs=1): err= 0: pid=144276: Thu Jul 25 12:18:44 2024 00:32:07.327 read: IOPS=1781, BW=13.9MiB/s (14.6MB/s)(69.6MiB/5003msec) 00:32:07.327 slat (nsec): min=9243, max=36478, avg=12052.40, stdev=3086.44 00:32:07.327 clat (usec): min=2156, max=7918, avg=4455.68, stdev=779.47 00:32:07.327 lat (usec): min=2172, max=7934, avg=4467.73, stdev=779.21 00:32:07.327 clat percentiles (usec): 00:32:07.327 | 1.00th=[ 3294], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 3916], 00:32:07.327 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359], 00:32:07.327 | 70.00th=[ 4424], 80.00th=[ 4752], 90.00th=[ 6063], 95.00th=[ 6128], 00:32:07.327 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 7767], 99.95th=[ 7832], 00:32:07.327 | 99.99th=[ 7898] 00:32:07.327 bw ( KiB/s): min=13968, max=14560, per=25.09%, avg=14209.33, stdev=193.95, samples=9 00:32:07.327 iops : min= 1746, max= 1820, avg=1776.11, stdev=24.23, samples=9 00:32:07.327 lat (msec) : 4=24.00%, 10=76.00% 00:32:07.327 cpu : usr=96.44%, sys=3.20%, ctx=6, majf=0, minf=51 00:32:07.327 IO depths : 1=0.2%, 2=1.4%, 4=69.7%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:07.327 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.327 issued rwts: total=8915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.327 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:07.327 filename0: (groupid=0, jobs=1): err= 0: pid=144277: Thu Jul 25 12:18:44 2024 00:32:07.327 read: IOPS=1789, BW=14.0MiB/s (14.7MB/s)(69.9MiB/5002msec) 00:32:07.327 slat (nsec): min=8397, max=40726, avg=12407.27, stdev=3346.66 00:32:07.327 clat (usec): min=1591, max=8063, avg=4434.98, stdev=812.28 00:32:07.327 lat (usec): min=1601, max=8074, avg=4447.38, stdev=811.83 00:32:07.327 clat percentiles (usec): 00:32:07.327 | 1.00th=[ 3064], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 3916], 00:32:07.327 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4359], 00:32:07.327 | 70.00th=[ 4424], 80.00th=[ 4817], 90.00th=[ 6063], 95.00th=[ 6128], 00:32:07.327 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7963], 00:32:07.327 | 99.99th=[ 8094] 00:32:07.327 bw ( KiB/s): min=13584, max=14928, per=25.17%, avg=14250.67, stdev=403.98, samples=9 00:32:07.327 iops : min= 1698, max= 1866, avg=1781.33, stdev=50.50, samples=9 00:32:07.327 lat (msec) : 2=0.06%, 4=27.50%, 10=72.44% 00:32:07.327 cpu : usr=96.66%, sys=3.00%, ctx=7, majf=0, minf=49 00:32:07.327 IO depths : 1=0.1%, 2=1.5%, 4=70.3%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.327 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.327 issued rwts: total=8952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.327 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:07.327 filename1: (groupid=0, jobs=1): err= 0: pid=144278: Thu Jul 25 12:18:44 2024 00:32:07.327 read: IOPS=1789, BW=14.0MiB/s (14.7MB/s)(70.0MiB/5003msec) 00:32:07.327 slat (usec): min=9, max=1458, avg=12.43, stdev=15.61 00:32:07.327 clat (usec): min=1677, max=8017, avg=4434.82, 
stdev=832.33 00:32:07.327 lat (usec): min=1692, max=8026, avg=4447.25, stdev=832.05 00:32:07.327 clat percentiles (usec): 00:32:07.328 | 1.00th=[ 3097], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3916], 00:32:07.328 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4293], 00:32:07.328 | 70.00th=[ 4424], 80.00th=[ 4752], 90.00th=[ 6128], 95.00th=[ 6194], 00:32:07.328 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7570], 99.95th=[ 7767], 00:32:07.328 | 99.99th=[ 8029] 00:32:07.328 bw ( KiB/s): min=13840, max=14704, per=25.26%, avg=14305.78, stdev=259.65, samples=9 00:32:07.328 iops : min= 1730, max= 1838, avg=1788.22, stdev=32.46, samples=9 00:32:07.328 lat (msec) : 2=0.01%, 4=24.30%, 10=75.69% 00:32:07.328 cpu : usr=96.92%, sys=2.74%, ctx=5, majf=0, minf=29 00:32:07.328 IO depths : 1=0.1%, 2=1.1%, 4=71.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.328 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.328 issued rwts: total=8954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:07.328 filename1: (groupid=0, jobs=1): err= 0: pid=144279: Thu Jul 25 12:18:44 2024 00:32:07.328 read: IOPS=1717, BW=13.4MiB/s (14.1MB/s)(67.1MiB/5001msec) 00:32:07.328 slat (nsec): min=8168, max=41197, avg=12361.76, stdev=3271.88 00:32:07.328 clat (usec): min=1582, max=8339, avg=4624.27, stdev=774.67 00:32:07.328 lat (usec): min=1599, max=8348, avg=4636.63, stdev=774.44 00:32:07.328 clat percentiles (usec): 00:32:07.328 | 1.00th=[ 3163], 5.00th=[ 3752], 10.00th=[ 3949], 20.00th=[ 4080], 00:32:07.328 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4555], 00:32:07.328 | 70.00th=[ 4883], 80.00th=[ 5276], 90.00th=[ 5997], 95.00th=[ 6128], 00:32:07.328 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7767], 99.95th=[ 7832], 00:32:07.328 | 99.99th=[ 8356] 00:32:07.328 bw ( KiB/s): min=12544, 
max=14224, per=24.49%, avg=13867.89, stdev=540.68, samples=9 00:32:07.328 iops : min= 1568, max= 1778, avg=1733.44, stdev=67.58, samples=9 00:32:07.328 lat (msec) : 2=0.06%, 4=11.33%, 10=88.62% 00:32:07.328 cpu : usr=96.58%, sys=3.06%, ctx=6, majf=0, minf=41 00:32:07.328 IO depths : 1=0.1%, 2=1.0%, 4=68.8%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.328 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.328 issued rwts: total=8591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.328 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:07.328 00:32:07.328 Run status group 0 (all jobs): 00:32:07.328 READ: bw=55.3MiB/s (58.0MB/s), 13.4MiB/s-14.0MiB/s (14.1MB/s-14.7MB/s), io=277MiB (290MB), run=5001-5003msec 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.588 12:18:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.588 00:32:07.588 real 0m24.765s 00:32:07.588 user 5m9.445s 00:32:07.588 sys 0m4.385s 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.588 12:18:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 ************************************ 00:32:07.588 END TEST fio_dif_rand_params 00:32:07.588 ************************************ 00:32:07.588 12:18:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:07.588 12:18:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:07.588 12:18:44 nvmf_dif -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:32:07.588 12:18:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 ************************************ 00:32:07.588 START TEST fio_dif_digest 00:32:07.588 ************************************ 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.588 bdev_null0 
00:32:07.588 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.589 [2024-07-25 12:18:44.793873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.589 { 00:32:07.589 "params": { 00:32:07.589 "name": "Nvme$subsystem", 00:32:07.589 "trtype": "$TEST_TRANSPORT", 00:32:07.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.589 "adrfam": "ipv4", 00:32:07.589 "trsvcid": "$NVMF_PORT", 00:32:07.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.589 "hdgst": ${hdgst:-false}, 00:32:07.589 "ddgst": ${ddgst:-false} 00:32:07.589 }, 00:32:07.589 "method": "bdev_nvme_attach_controller" 00:32:07.589 } 00:32:07.589 EOF 00:32:07.589 )") 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:07.589 
12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:07.589 "params": { 00:32:07.589 "name": "Nvme0", 00:32:07.589 "trtype": "tcp", 00:32:07.589 "traddr": "10.0.0.2", 00:32:07.589 "adrfam": "ipv4", 00:32:07.589 "trsvcid": "4420", 00:32:07.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.589 "hdgst": true, 00:32:07.589 "ddgst": true 00:32:07.589 }, 00:32:07.589 "method": "bdev_nvme_attach_controller" 00:32:07.589 }' 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.589 12:18:44 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:07.589 12:18:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.246 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:08.246 ... 00:32:08.246 fio-3.35 00:32:08.246 Starting 3 threads 00:32:08.246 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.443 00:32:20.443 filename0: (groupid=0, jobs=1): err= 0: pid=145478: Thu Jul 25 12:18:55 2024 00:32:20.443 read: IOPS=180, BW=22.6MiB/s (23.6MB/s)(227MiB/10048msec) 00:32:20.443 slat (nsec): min=9598, max=90547, avg=22944.26, stdev=8311.14 00:32:20.443 clat (usec): min=12429, max=59150, avg=16550.02, stdev=2752.34 00:32:20.443 lat (usec): min=12440, max=59181, avg=16572.97, stdev=2752.16 00:32:20.443 clat percentiles (usec): 00:32:20.443 | 1.00th=[13698], 5.00th=[14484], 10.00th=[14877], 20.00th=[15401], 00:32:20.443 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16319], 60.00th=[16712], 00:32:20.443 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:32:20.443 | 99.00th=[19530], 99.50th=[20317], 99.90th=[57934], 99.95th=[58983], 00:32:20.443 | 99.99th=[58983] 00:32:20.443 bw ( KiB/s): min=20992, max=24576, per=32.63%, avg=23168.00, stdev=869.13, samples=20 00:32:20.443 iops : min= 164, max= 192, avg=181.00, stdev= 6.79, samples=20 00:32:20.443 lat (msec) : 20=99.34%, 50=0.28%, 
100=0.39% 00:32:20.443 cpu : usr=95.47%, sys=4.13%, ctx=25, majf=0, minf=147 00:32:20.443 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.443 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:20.443 filename0: (groupid=0, jobs=1): err= 0: pid=145479: Thu Jul 25 12:18:55 2024 00:32:20.443 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(241MiB/10045msec) 00:32:20.443 slat (nsec): min=9776, max=64004, avg=22011.69, stdev=7148.37 00:32:20.443 clat (usec): min=9277, max=55303, avg=15581.86, stdev=1704.99 00:32:20.443 lat (usec): min=9299, max=55323, avg=15603.87, stdev=1705.31 00:32:20.443 clat percentiles (usec): 00:32:20.443 | 1.00th=[11863], 5.00th=[13698], 10.00th=[14091], 20.00th=[14615], 00:32:20.443 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:32:20.443 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:32:20.443 | 99.00th=[18482], 99.50th=[19530], 99.90th=[49546], 99.95th=[55313], 00:32:20.443 | 99.99th=[55313] 00:32:20.444 bw ( KiB/s): min=23599, max=25600, per=34.73%, avg=24655.15, stdev=527.44, samples=20 00:32:20.444 iops : min= 184, max= 200, avg=192.60, stdev= 4.16, samples=20 00:32:20.444 lat (msec) : 10=0.05%, 20=99.53%, 50=0.36%, 100=0.05% 00:32:20.444 cpu : usr=96.80%, sys=2.80%, ctx=15, majf=0, minf=145 00:32:20.444 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.444 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.444 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:20.444 filename0: 
(groupid=0, jobs=1): err= 0: pid=145480: Thu Jul 25 12:18:55 2024 00:32:20.444 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(229MiB/10007msec) 00:32:20.444 slat (nsec): min=9687, max=58280, avg=19199.24, stdev=7325.83 00:32:20.444 clat (usec): min=7614, max=22171, avg=16362.35, stdev=1349.20 00:32:20.444 lat (usec): min=7625, max=22188, avg=16381.55, stdev=1349.44 00:32:20.444 clat percentiles (usec): 00:32:20.444 | 1.00th=[11863], 5.00th=[14222], 10.00th=[14877], 20.00th=[15401], 00:32:20.444 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16450], 60.00th=[16712], 00:32:20.444 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:32:20.444 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21627], 99.95th=[22152], 00:32:20.444 | 99.99th=[22152] 00:32:20.444 bw ( KiB/s): min=22784, max=25088, per=32.99%, avg=23424.00, stdev=607.51, samples=20 00:32:20.444 iops : min= 178, max= 196, avg=183.00, stdev= 4.75, samples=20 00:32:20.444 lat (msec) : 10=0.11%, 20=99.34%, 50=0.55% 00:32:20.444 cpu : usr=96.48%, sys=3.15%, ctx=24, majf=0, minf=191 00:32:20.444 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.444 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.444 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:20.444 00:32:20.444 Run status group 0 (all jobs): 00:32:20.444 READ: bw=69.3MiB/s (72.7MB/s), 22.6MiB/s-24.0MiB/s (23.6MB/s-25.2MB/s), io=697MiB (730MB), run=10007-10048msec 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 
00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.444 00:32:20.444 real 0m11.237s 00:32:20.444 user 0m41.056s 00:32:20.444 sys 0m1.363s 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:20.444 12:18:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:20.444 ************************************ 00:32:20.444 END TEST fio_dif_digest 00:32:20.444 ************************************ 00:32:20.444 12:18:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:20.444 12:18:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:20.444 rmmod nvme_tcp 00:32:20.444 rmmod nvme_fabrics 00:32:20.444 rmmod nvme_keyring 
00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 135444 ']' 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 135444 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 135444 ']' 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 135444 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135444 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135444' 00:32:20.444 killing process with pid 135444 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@969 -- # kill 135444 00:32:20.444 12:18:56 nvmf_dif -- common/autotest_common.sh@974 -- # wait 135444 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:20.444 12:18:56 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:21.819 Waiting for block devices as requested 00:32:21.819 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:21.819 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:22.077 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:22.077 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:22.077 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:22.336 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:22.336 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:22.336 
0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:22.336 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:22.595 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:22.595 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:22.595 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:22.855 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:22.855 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:22.855 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:23.114 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:23.114 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:23.114 12:19:00 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:23.114 12:19:00 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:23.114 12:19:00 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.114 12:19:00 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:23.114 12:19:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.114 12:19:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:23.114 12:19:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.648 12:19:02 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:25.648 00:32:25.648 real 1m15.444s 00:32:25.648 user 7m44.542s 00:32:25.648 sys 0m18.764s 00:32:25.648 12:19:02 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:25.648 12:19:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:25.648 ************************************ 00:32:25.648 END TEST nvmf_dif 00:32:25.648 ************************************ 00:32:25.649 12:19:02 -- spdk/autotest.sh@299 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:25.649 12:19:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:25.649 12:19:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:25.649 
12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:32:25.649 ************************************ 00:32:25.649 START TEST nvmf_abort_qd_sizes 00:32:25.649 ************************************ 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:25.649 * Looking for test storage... 00:32:25.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:25.649 12:19:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:30.924 12:19:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:30.924 
12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:30.924 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:30.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.924 
12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.924 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:30.925 Found net devices under 0000:af:00.0: cvl_0_0 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:30.925 Found net devices under 0000:af:00.1: cvl_0_1 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.925 12:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:30.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:32:30.925 00:32:30.925 --- 10.0.0.2 ping statistics --- 00:32:30.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.925 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:32:30.925 00:32:30.925 --- 10.0.0.1 ping statistics --- 00:32:30.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.925 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:30.925 12:19:08 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:34.214 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.6 (8086 2021): ioatdma -> 
vfio-pci 00:32:34.214 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:34.214 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:34.782 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=153558 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 153558 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 153558 ']' 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 
-- # local max_retries=100 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.782 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:35.040 [2024-07-25 12:19:12.128935] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:32:35.040 [2024-07-25 12:19:12.128996] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.040 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.040 [2024-07-25 12:19:12.214250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:35.040 [2024-07-25 12:19:12.310140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.040 [2024-07-25 12:19:12.310183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.040 [2024-07-25 12:19:12.310194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.040 [2024-07-25 12:19:12.310203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.040 [2024-07-25 12:19:12.310210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
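The net-device discovery traced earlier in this log (nvmf/common.sh@383 through @400) globs `/sys/bus/pci/devices/$pci/net/*` and then reduces each path to a bare interface name with bash parameter expansion before printing "Found net devices under ...". A minimal standalone illustration of that idiom, with illustrative paths rather than a real sysfs:

```shell
# ${arr[@]##*/} applies the longest-match prefix strip "*/" to every
# array element, turning sysfs glob results into bare interface names.
# The paths below are made up for the example, not read from sysfs.
pci_net_devs=("/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
printf 'Found net device: %s\n' "${pci_net_devs[@]}"
```

This is why the log reports `cvl_0_0` and `cvl_0_1` rather than the full sysfs paths the glob actually matched.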
00:32:35.040 [2024-07-25 12:19:12.310262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.040 [2024-07-25 12:19:12.310376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:35.040 [2024-07-25 12:19:12.310489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.040 [2024-07-25 12:19:12.310489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:35.298 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.298 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:32:35.298 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:35.298 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:35.298 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 
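The nvmf_tcp_init sequence above stores the namespace wrapper as an array (`NVMF_TARGET_NS_CMD`, nvmf/common.sh@243) and later prepends it to `NVMF_APP` (@270), which is how `nvmf_tgt` ends up launched via `ip netns exec cvl_0_0_ns_spdk ...` at @480. A sketch of that array-prefixing pattern, using `echo` as a harmless stand-in for the real binary since the actual invocation needs root and an existing namespace:

```shell
# Array-based command prefixing as done by nvmf/common.sh: keeping both
# the wrapper and the application as arrays preserves word boundaries
# and quoting when the combined command is eventually executed.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(echo nvmf_tgt -i 0)   # echo stands in for the real target binary
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
printf '%s\n' "${NVMF_APP[*]}"
# Expanding "${NVMF_APP[@]}" would now run the application inside the
# namespace; not done here because it requires root privileges.
```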
00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:35.557 12:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:35.557 ************************************ 00:32:35.557 START TEST spdk_target_abort 00:32:35.557 ************************************ 00:32:35.557 12:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:32:35.557 12:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:35.557 12:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:32:35.557 12:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.557 12:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.903 spdk_targetn1 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.903 [2024-07-25 12:19:15.525038] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.903 [2024-07-25 12:19:15.565328] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.903 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:38.904 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.904 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:38.904 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.904 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.904 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:38.904 12:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.904 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.435 Initializing NVMe Controllers 00:32:41.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:41.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:41.435 Initialization complete. Launching workers. 
00:32:41.435 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6496, failed: 0 00:32:41.435 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1202, failed to submit 5294 00:32:41.435 success 738, unsuccess 464, failed 0 00:32:41.435 12:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:41.435 12:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:41.693 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.980 Initializing NVMe Controllers 00:32:44.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:44.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:44.980 Initialization complete. Launching workers. 
00:32:44.980 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8431, failed: 0 00:32:44.980 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7197 00:32:44.980 success 310, unsuccess 924, failed 0 00:32:44.980 12:19:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:44.980 12:19:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:44.980 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.265 Initializing NVMe Controllers 00:32:48.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:48.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:48.265 Initialization complete. Launching workers. 
00:32:48.265 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17829, failed: 0 00:32:48.265 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1984, failed to submit 15845 00:32:48.265 success 142, unsuccess 1842, failed 0 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.265 12:19:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 153558 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 153558 ']' 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 153558 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153558 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153558' 00:32:49.641 killing process with pid 153558 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 153558 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 153558 00:32:49.641 00:32:49.641 real 0m14.249s 00:32:49.641 user 0m55.217s 00:32:49.641 sys 0m2.177s 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:49.641 12:19:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 ************************************ 00:32:49.641 END TEST spdk_target_abort 00:32:49.641 ************************************ 00:32:49.900 12:19:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:49.900 12:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:49.900 12:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:49.900 12:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:49.900 ************************************ 00:32:49.900 START TEST kernel_target_abort 00:32:49.900 ************************************ 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:49.900 12:19:26 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:49.900 12:19:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:49.900 12:19:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:49.900 12:19:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:52.436 Waiting for block devices as requested 00:32:52.695 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:52.695 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:52.695 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:52.954 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:52.954 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:52.954 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:53.213 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:53.213 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:53.213 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:53.213 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:53.471 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:53.471 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:53.471 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:53.731 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:53.731 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:53.731 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:53.990 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:53.990 No valid GPT data, bailing 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:53.990 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:54.249 00:32:54.249 Discovery Log Number of Records 2, Generation counter 2 00:32:54.249 =====Discovery Log Entry 0====== 00:32:54.249 trtype: tcp 00:32:54.249 adrfam: ipv4 00:32:54.249 subtype: current discovery subsystem 00:32:54.249 treq: not specified, sq flow control disable supported 00:32:54.249 portid: 1 00:32:54.249 trsvcid: 4420 00:32:54.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:54.249 traddr: 10.0.0.1 00:32:54.249 eflags: none 00:32:54.249 sectype: none 00:32:54.249 =====Discovery Log Entry 1====== 00:32:54.249 trtype: tcp 00:32:54.249 adrfam: ipv4 00:32:54.249 subtype: nvme subsystem 00:32:54.249 treq: not specified, sq flow control disable supported 00:32:54.249 portid: 1 00:32:54.249 trsvcid: 4420 00:32:54.249 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:54.249 traddr: 10.0.0.1 00:32:54.249 eflags: none 00:32:54.249 sectype: none 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:54.249 12:19:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:54.249 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.544 Initializing NVMe Controllers 00:32:57.544 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:57.544 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:57.544 Initialization complete. Launching workers. 
00:32:57.544 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 46521, failed: 0 00:32:57.544 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 46521, failed to submit 0 00:32:57.544 success 0, unsuccess 46521, failed 0 00:32:57.544 12:19:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:57.544 12:19:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:57.544 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.868 Initializing NVMe Controllers 00:33:00.868 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:00.868 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:00.868 Initialization complete. Launching workers. 
00:33:00.868 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79544, failed: 0 00:33:00.868 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20062, failed to submit 59482 00:33:00.868 success 0, unsuccess 20062, failed 0 00:33:00.868 12:19:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:00.868 12:19:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:00.868 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.400 Initializing NVMe Controllers 00:33:03.400 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:03.400 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:03.400 Initialization complete. Launching workers. 
00:33:03.400 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76660, failed: 0 00:33:03.400 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19134, failed to submit 57526 00:33:03.400 success 0, unsuccess 19134, failed 0 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:03.400 12:19:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:06.688 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:06.688 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:07.256 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:33:07.514 00:33:07.514 real 0m17.624s 00:33:07.514 user 0m7.978s 00:33:07.514 sys 0m5.334s 00:33:07.514 12:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.514 12:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.514 ************************************ 00:33:07.515 END TEST kernel_target_abort 00:33:07.515 ************************************ 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:07.515 rmmod nvme_tcp 00:33:07.515 rmmod nvme_fabrics 00:33:07.515 rmmod nvme_keyring 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 153558 ']' 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 153558 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 153558 ']' 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 153558 00:33:07.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (153558) - No such process 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 153558 is not found' 00:33:07.515 Process with pid 153558 is not found 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:07.515 12:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:10.804 Waiting for block devices as requested 00:33:10.804 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:33:10.804 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:10.804 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:10.804 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:10.804 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:10.804 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:10.804 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:10.804 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:11.063 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:11.063 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:11.063 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:11.321 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:11.321 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:11.321 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:11.321 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:33:11.580 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:11.580 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:11.580 12:19:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.115 12:19:50 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:14.115 00:33:14.115 real 0m48.466s 00:33:14.115 user 1m7.523s 00:33:14.115 sys 0m16.108s 00:33:14.115 12:19:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:14.115 12:19:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:14.115 ************************************ 00:33:14.115 END TEST nvmf_abort_qd_sizes 00:33:14.115 ************************************ 00:33:14.115 12:19:50 -- spdk/autotest.sh@301 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:14.115 12:19:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:14.116 12:19:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:14.116 12:19:50 -- common/autotest_common.sh@10 -- # set +x 00:33:14.116 ************************************ 00:33:14.116 START TEST keyring_file 00:33:14.116 ************************************ 00:33:14.116 12:19:50 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:14.116 * Looking for test storage... 00:33:14.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.116 12:19:51 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.116 12:19:51 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.116 12:19:51 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.116 12:19:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.116 12:19:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.116 12:19:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.116 12:19:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:14.116 12:19:51 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:14.116 12:19:51 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AnYPZB5n7e 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AnYPZB5n7e 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AnYPZB5n7e 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AnYPZB5n7e 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QLEr51Yui9 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:14.116 12:19:51 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:14.116 12:19:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QLEr51Yui9 00:33:14.116 12:19:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QLEr51Yui9 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QLEr51Yui9 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=162940 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 162940 00:33:14.116 12:19:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:14.116 12:19:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 162940 ']' 00:33:14.116 12:19:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.116 12:19:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:14.116 12:19:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.116 12:19:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:14.116 12:19:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:14.116 [2024-07-25 12:19:51.296995] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
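The `prep_key`/`format_interchange_psk` steps above wrap a raw hex key into the NVMe/TCP TLS interchange format before writing it to a temp file (`/tmp/tmp.AnYPZB5n7e`, `/tmp/tmp.QLEr51Yui9`). A minimal Python sketch of that framing, assuming the layout from NVMe TP 8011 (key bytes plus a little-endian CRC32 trailer, base64-encoded under a `NVMeTLSkey-1:<digest>:` prefix); this is a sketch of the format, not a copy of SPDK's inline helper:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Wrap a raw hex PSK in the NVMe/TCP TLS interchange format:
    NVMeTLSkey-1:<digest as 2 hex digits>:<base64(key || CRC32(key))>:
    Layout assumed from NVMe TP 8011; digest=0 matches the log's calls."""
    key = bytes.fromhex(key_hex)
    # CRC32 of the key bytes, appended little-endian, then base64-encoded.
    body = key + struct.pack("<I", zlib.crc32(key))
    return f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(body).decode()}:"

# Same key0 value as in the log (00112233445566778899aabbccddeeff, digest 0).
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The resulting string is what gets `chmod 0600`'d into the temp key file and later passed to `keyring_file_add_key`.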
00:33:14.116 [2024-07-25 12:19:51.297057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162940 ] 00:33:14.116 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.116 [2024-07-25 12:19:51.378100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.375 [2024-07-25 12:19:51.464972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:14.634 12:19:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:14.634 [2024-07-25 12:19:51.689964] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.634 null0 00:33:14.634 [2024-07-25 12:19:51.722012] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:14.634 [2024-07-25 12:19:51.722396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:14.634 [2024-07-25 12:19:51.730016] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.634 12:19:51 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:14.634 [2024-07-25 12:19:51.742055] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:14.634 request: 00:33:14.634 { 00:33:14.634 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:14.634 "secure_channel": false, 00:33:14.634 "listen_address": { 00:33:14.634 "trtype": "tcp", 00:33:14.634 "traddr": "127.0.0.1", 00:33:14.634 "trsvcid": "4420" 00:33:14.634 }, 00:33:14.634 "method": "nvmf_subsystem_add_listener", 00:33:14.634 "req_id": 1 00:33:14.634 } 00:33:14.634 Got JSON-RPC error response 00:33:14.634 response: 00:33:14.634 { 00:33:14.634 "code": -32602, 00:33:14.634 "message": "Invalid parameters" 00:33:14.634 } 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:14.634 12:19:51 keyring_file -- keyring/file.sh@46 -- # bperfpid=162953 00:33:14.634 12:19:51 keyring_file -- keyring/file.sh@48 -- # waitforlisten 162953 /var/tmp/bperf.sock 
00:33:14.634 12:19:51 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 162953 ']' 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:14.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:14.634 12:19:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:14.634 [2024-07-25 12:19:51.799317] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:33:14.634 [2024-07-25 12:19:51.799379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162953 ] 00:33:14.634 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.634 [2024-07-25 12:19:51.880952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.892 [2024-07-25 12:19:51.986005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.461 12:19:52 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:15.462 12:19:52 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:15.462 12:19:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:15.462 12:19:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:15.721 12:19:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QLEr51Yui9 00:33:15.721 12:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QLEr51Yui9 00:33:16.288 12:19:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:16.288 12:19:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:16.288 12:19:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.288 12:19:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.288 12:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.288 12:19:53 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.AnYPZB5n7e == 
\/\t\m\p\/\t\m\p\.\A\n\Y\P\Z\B\5\n\7\e ]] 00:33:16.288 12:19:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:16.288 12:19:53 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:33:16.288 12:19:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.288 12:19:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:16.288 12:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.546 12:19:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QLEr51Yui9 == \/\t\m\p\/\t\m\p\.\Q\L\E\r\5\1\Y\u\i\9 ]] 00:33:16.546 12:19:53 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:16.546 12:19:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:16.546 12:19:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.546 12:19:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.546 12:19:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.546 12:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.805 12:19:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:16.805 12:19:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:16.805 12:19:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:16.805 12:19:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.805 12:19:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.805 12:19:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:16.805 12:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.064 12:19:54 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:33:17.064 12:19:54 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.064 12:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.324 [2024-07-25 12:19:54.542504] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:17.324 nvme0n1 00:33:17.582 12:19:54 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:17.582 12:19:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:17.582 12:19:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:17.582 12:19:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:17.582 12:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.582 12:19:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:17.582 12:19:54 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:17.583 12:19:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:17.583 12:19:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:17.583 12:19:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:17.583 12:19:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:17.583 12:19:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:17.583 12:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.841 12:19:54 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:17.841 12:19:54 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:17.841 Running I/O for 1 seconds... 00:33:19.218 00:33:19.218 Latency(us) 00:33:19.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.218 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:19.218 nvme0n1 : 1.01 6371.58 24.89 0.00 0.00 19999.90 6255.71 27167.65 00:33:19.218 =================================================================================================================== 00:33:19.218 Total : 6371.58 24.89 0.00 0.00 19999.90 6255.71 27167.65 00:33:19.218 0 00:33:19.218 12:19:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:19.218 12:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:19.218 12:19:56 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:19.218 12:19:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:19.218 12:19:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.218 12:19:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.218 12:19:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.218 12:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.483 12:19:56 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:19.483 12:19:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:19.483 12:19:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.483 12:19:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:19.483 12:19:56 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.483 12:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.483 12:19:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.769 12:19:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:19.769 12:19:56 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.769 12:19:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:19.769 12:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:20.027 [2024-07-25 12:19:57.158781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:20.027 [2024-07-25 12:19:57.159751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0cd0 (107): Transport endpoint is not connected 00:33:20.027 [2024-07-25 12:19:57.160740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0cd0 (9): Bad file descriptor 00:33:20.027 [2024-07-25 12:19:57.161740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:20.027 [2024-07-25 12:19:57.161757] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:20.027 [2024-07-25 12:19:57.161770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:20.027 request: 00:33:20.027 { 00:33:20.027 "name": "nvme0", 00:33:20.027 "trtype": "tcp", 00:33:20.027 "traddr": "127.0.0.1", 00:33:20.027 "adrfam": "ipv4", 00:33:20.027 "trsvcid": "4420", 00:33:20.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.027 "prchk_reftag": false, 00:33:20.027 "prchk_guard": false, 00:33:20.027 "hdgst": false, 00:33:20.027 "ddgst": false, 00:33:20.027 "psk": "key1", 00:33:20.027 "method": "bdev_nvme_attach_controller", 00:33:20.027 "req_id": 1 00:33:20.027 } 00:33:20.027 Got JSON-RPC error response 00:33:20.027 response: 00:33:20.027 { 00:33:20.027 "code": -5, 00:33:20.027 "message": "Input/output error" 00:33:20.027 } 00:33:20.027 12:19:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:20.028 12:19:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:20.028 12:19:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:20.028 12:19:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:20.028 12:19:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:20.028 
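The `NOT bperf_cmd ...` invocation above is a negative test: attaching with the wrong PSK (`key1`) is expected to fail, and the `NOT` wrapper from `autotest_common.sh` inverts the exit status so the test passes only when the command errors out. A simplified sketch of that pattern (the real helper also tracks the error status in `es` and handles non-function arguments, which is omitted here):

```shell
# Minimal sketch of the NOT() negative-test helper seen in the trace:
# succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as the test expected
}

NOT false && echo "inverted-ok"
if NOT true; then echo "unexpected"; else echo "caught-unexpected-success"; fi
```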
12:19:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:20.028 12:19:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.028 12:19:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.028 12:19:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:20.028 12:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.284 12:19:57 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:20.284 12:19:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:20.284 12:19:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:20.284 12:19:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.284 12:19:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.284 12:19:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:20.284 12:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.540 12:19:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:20.540 12:19:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:20.540 12:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:20.798 12:19:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:20.798 12:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:21.056 12:19:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:21.056 12:19:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:21.056 12:19:58 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.314 12:19:58 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:21.314 12:19:58 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.AnYPZB5n7e 00:33:21.314 12:19:58 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:21.314 12:19:58 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:21.314 12:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:21.572 [2024-07-25 12:19:58.676709] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AnYPZB5n7e': 0100660 00:33:21.572 [2024-07-25 12:19:58.676748] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:21.572 request: 00:33:21.572 { 00:33:21.572 "name": "key0", 00:33:21.572 "path": "/tmp/tmp.AnYPZB5n7e", 00:33:21.572 "method": "keyring_file_add_key", 00:33:21.572 "req_id": 1 00:33:21.572 } 00:33:21.572 Got JSON-RPC error response 00:33:21.572 response: 00:33:21.572 { 00:33:21.572 "code": -1, 00:33:21.572 "message": "Operation not permitted" 
00:33:21.572 } 00:33:21.572 12:19:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:21.572 12:19:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:21.572 12:19:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:21.572 12:19:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:21.572 12:19:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.AnYPZB5n7e 00:33:21.572 12:19:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:21.572 12:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AnYPZB5n7e 00:33:21.831 12:19:58 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.AnYPZB5n7e 00:33:21.831 12:19:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:21.831 12:19:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:21.831 12:19:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.831 12:19:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.831 12:19:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:21.831 12:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.089 12:19:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:22.089 12:19:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
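The sequence above demonstrates the key-file permission check: `keyring_file_add_key` rejects a key file at mode 0660 ("Invalid permissions for key file ... 0100660", JSON-RPC code -1), and the test then chmods it to 0600 and retries successfully. A standalone sketch of that check, assuming (as the error message suggests) that group/other access bits must be clear:

```shell
# Sketch of the permission gate keyring_file_check_path enforces:
# a key file with any group/other bits set is rejected.
keyfile=$(mktemp)
chmod 0660 "$keyfile"
mode=$(stat -c %a "$keyfile")          # GNU stat; Linux CI assumption
echo "mode $mode would be rejected"
chmod 0600 "$keyfile"
mode=$(stat -c %a "$keyfile")
echo "mode $mode would be accepted"
rm -f "$keyfile"
```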
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:22.089 12:19:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.089 12:19:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.348 [2024-07-25 12:19:59.430799] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AnYPZB5n7e': No such file or directory 00:33:22.348 [2024-07-25 12:19:59.430832] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:22.348 [2024-07-25 12:19:59.430869] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:22.348 [2024-07-25 12:19:59.430879] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:22.348 [2024-07-25 12:19:59.430890] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:22.348 request: 00:33:22.348 { 00:33:22.348 "name": "nvme0", 00:33:22.348 "trtype": "tcp", 00:33:22.348 "traddr": "127.0.0.1", 00:33:22.348 "adrfam": "ipv4", 00:33:22.348 "trsvcid": "4420", 00:33:22.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:22.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:22.348 
"prchk_reftag": false, 00:33:22.348 "prchk_guard": false, 00:33:22.348 "hdgst": false, 00:33:22.348 "ddgst": false, 00:33:22.348 "psk": "key0", 00:33:22.348 "method": "bdev_nvme_attach_controller", 00:33:22.348 "req_id": 1 00:33:22.348 } 00:33:22.348 Got JSON-RPC error response 00:33:22.348 response: 00:33:22.348 { 00:33:22.348 "code": -19, 00:33:22.348 "message": "No such device" 00:33:22.348 } 00:33:22.348 12:19:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:22.348 12:19:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:22.348 12:19:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:22.348 12:19:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:22.348 12:19:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:22.348 12:19:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:22.607 12:19:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xJxdrtNGY4 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:22.607 12:19:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:22.607 12:19:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:22.607 12:19:59 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:22.607 12:19:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:22.607 12:19:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:22.607 12:19:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xJxdrtNGY4 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xJxdrtNGY4 00:33:22.607 12:19:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.xJxdrtNGY4 00:33:22.607 12:19:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xJxdrtNGY4 00:33:22.607 12:19:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xJxdrtNGY4 00:33:22.865 12:20:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.865 12:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.124 nvme0n1 00:33:23.124 12:20:00 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:23.124 12:20:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:23.124 12:20:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.124 12:20:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.124 12:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.124 12:20:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
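The `prep_key`/`format_interchange_psk` step above (`nvmf/common.sh@705 -- # python -`) writes the configured secret out in the NVMe TLS PSK interchange format before adding it to the keyring. A hedged sketch of what that inline Python plausibly computes, assuming the interchange format from the NVMe-oF TLS specification: a `NVMeTLSkey-1` header, a two-digit hash identifier (`00` for `digest=0`), and the secret concatenated with its little-endian CRC-32, base64-encoded, followed by a trailing colon:

```shell
# Sketch of format_interchange_psk for key 00112233445566778899aabbccddeeff,
# digest 0. The exact encoding is an assumption based on the NVMe TLS PSK
# interchange format, not taken from the log itself.
key="00112233445566778899aabbccddeeff"
psk=$(python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                      # secret as configured
crc = zlib.crc32(secret) & 0xFFFFFFFF              # CRC-32 of the secret
blob = secret + struct.pack("<I", crc)             # secret || crc (LE)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(blob).decode())
EOF
)
echo "$psk"
```

The 32-byte secret plus 4 CRC bytes base64-encode to 48 characters, so the full string is 65 characters including header and trailing colon.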
00:33:23.382 12:20:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:23.382 12:20:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:23.382 12:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:23.641 12:20:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:23.641 12:20:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:23.641 12:20:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.641 12:20:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.641 12:20:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.899 12:20:01 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:23.899 12:20:01 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:23.899 12:20:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:23.899 12:20:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.899 12:20:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.899 12:20:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.899 12:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.158 12:20:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:24.158 12:20:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:24.158 12:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:24.417 12:20:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:33:24.417 12:20:01 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:24.417 12:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.675 12:20:01 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:24.675 12:20:01 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xJxdrtNGY4 00:33:24.675 12:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xJxdrtNGY4 00:33:24.933 12:20:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QLEr51Yui9 00:33:24.933 12:20:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QLEr51Yui9 00:33:25.192 12:20:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:25.192 12:20:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:25.451 nvme0n1 00:33:25.451 12:20:02 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:25.451 12:20:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:26.019 12:20:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:26.019 "subsystems": [ 00:33:26.019 { 00:33:26.019 "subsystem": "keyring", 00:33:26.019 "config": [ 00:33:26.019 { 00:33:26.019 "method": "keyring_file_add_key", 00:33:26.019 
"params": { 00:33:26.019 "name": "key0", 00:33:26.019 "path": "/tmp/tmp.xJxdrtNGY4" 00:33:26.019 } 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "method": "keyring_file_add_key", 00:33:26.019 "params": { 00:33:26.019 "name": "key1", 00:33:26.019 "path": "/tmp/tmp.QLEr51Yui9" 00:33:26.019 } 00:33:26.019 } 00:33:26.019 ] 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "subsystem": "iobuf", 00:33:26.019 "config": [ 00:33:26.019 { 00:33:26.019 "method": "iobuf_set_options", 00:33:26.019 "params": { 00:33:26.019 "small_pool_count": 8192, 00:33:26.019 "large_pool_count": 1024, 00:33:26.019 "small_bufsize": 8192, 00:33:26.019 "large_bufsize": 135168 00:33:26.019 } 00:33:26.019 } 00:33:26.019 ] 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "subsystem": "sock", 00:33:26.019 "config": [ 00:33:26.019 { 00:33:26.019 "method": "sock_set_default_impl", 00:33:26.019 "params": { 00:33:26.019 "impl_name": "posix" 00:33:26.019 } 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "method": "sock_impl_set_options", 00:33:26.019 "params": { 00:33:26.019 "impl_name": "ssl", 00:33:26.019 "recv_buf_size": 4096, 00:33:26.019 "send_buf_size": 4096, 00:33:26.019 "enable_recv_pipe": true, 00:33:26.019 "enable_quickack": false, 00:33:26.019 "enable_placement_id": 0, 00:33:26.019 "enable_zerocopy_send_server": true, 00:33:26.019 "enable_zerocopy_send_client": false, 00:33:26.019 "zerocopy_threshold": 0, 00:33:26.019 "tls_version": 0, 00:33:26.019 "enable_ktls": false 00:33:26.019 } 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "method": "sock_impl_set_options", 00:33:26.019 "params": { 00:33:26.019 "impl_name": "posix", 00:33:26.019 "recv_buf_size": 2097152, 00:33:26.019 "send_buf_size": 2097152, 00:33:26.019 "enable_recv_pipe": true, 00:33:26.019 "enable_quickack": false, 00:33:26.019 "enable_placement_id": 0, 00:33:26.019 "enable_zerocopy_send_server": true, 00:33:26.019 "enable_zerocopy_send_client": false, 00:33:26.019 "zerocopy_threshold": 0, 00:33:26.019 "tls_version": 0, 00:33:26.019 "enable_ktls": false 
00:33:26.019 } 00:33:26.019 } 00:33:26.019 ] 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "subsystem": "vmd", 00:33:26.019 "config": [] 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "subsystem": "accel", 00:33:26.019 "config": [ 00:33:26.019 { 00:33:26.019 "method": "accel_set_options", 00:33:26.019 "params": { 00:33:26.019 "small_cache_size": 128, 00:33:26.019 "large_cache_size": 16, 00:33:26.019 "task_count": 2048, 00:33:26.019 "sequence_count": 2048, 00:33:26.019 "buf_count": 2048 00:33:26.019 } 00:33:26.019 } 00:33:26.019 ] 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "subsystem": "bdev", 00:33:26.019 "config": [ 00:33:26.019 { 00:33:26.019 "method": "bdev_set_options", 00:33:26.019 "params": { 00:33:26.019 "bdev_io_pool_size": 65535, 00:33:26.019 "bdev_io_cache_size": 256, 00:33:26.019 "bdev_auto_examine": true, 00:33:26.019 "iobuf_small_cache_size": 128, 00:33:26.019 "iobuf_large_cache_size": 16 00:33:26.019 } 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "method": "bdev_raid_set_options", 00:33:26.019 "params": { 00:33:26.019 "process_window_size_kb": 1024, 00:33:26.019 "process_max_bandwidth_mb_sec": 0 00:33:26.019 } 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "method": "bdev_iscsi_set_options", 00:33:26.019 "params": { 00:33:26.019 "timeout_sec": 30 00:33:26.019 } 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "method": "bdev_nvme_set_options", 00:33:26.019 "params": { 00:33:26.019 "action_on_timeout": "none", 00:33:26.019 "timeout_us": 0, 00:33:26.019 "timeout_admin_us": 0, 00:33:26.019 "keep_alive_timeout_ms": 10000, 00:33:26.019 "arbitration_burst": 0, 00:33:26.019 "low_priority_weight": 0, 00:33:26.019 "medium_priority_weight": 0, 00:33:26.019 "high_priority_weight": 0, 00:33:26.019 "nvme_adminq_poll_period_us": 10000, 00:33:26.019 "nvme_ioq_poll_period_us": 0, 00:33:26.019 "io_queue_requests": 512, 00:33:26.019 "delay_cmd_submit": true, 00:33:26.019 "transport_retry_count": 4, 00:33:26.019 "bdev_retry_count": 3, 00:33:26.019 "transport_ack_timeout": 0, 
00:33:26.019 "ctrlr_loss_timeout_sec": 0, 00:33:26.019 "reconnect_delay_sec": 0, 00:33:26.019 "fast_io_fail_timeout_sec": 0, 00:33:26.019 "disable_auto_failback": false, 00:33:26.019 "generate_uuids": false, 00:33:26.019 "transport_tos": 0, 00:33:26.019 "nvme_error_stat": false, 00:33:26.019 "rdma_srq_size": 0, 00:33:26.019 "io_path_stat": false, 00:33:26.019 "allow_accel_sequence": false, 00:33:26.019 "rdma_max_cq_size": 0, 00:33:26.019 "rdma_cm_event_timeout_ms": 0, 00:33:26.019 "dhchap_digests": [ 00:33:26.019 "sha256", 00:33:26.019 "sha384", 00:33:26.019 "sha512" 00:33:26.020 ], 00:33:26.020 "dhchap_dhgroups": [ 00:33:26.020 "null", 00:33:26.020 "ffdhe2048", 00:33:26.020 "ffdhe3072", 00:33:26.020 "ffdhe4096", 00:33:26.020 "ffdhe6144", 00:33:26.020 "ffdhe8192" 00:33:26.020 ] 00:33:26.020 } 00:33:26.020 }, 00:33:26.020 { 00:33:26.020 "method": "bdev_nvme_attach_controller", 00:33:26.020 "params": { 00:33:26.020 "name": "nvme0", 00:33:26.020 "trtype": "TCP", 00:33:26.020 "adrfam": "IPv4", 00:33:26.020 "traddr": "127.0.0.1", 00:33:26.020 "trsvcid": "4420", 00:33:26.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.020 "prchk_reftag": false, 00:33:26.020 "prchk_guard": false, 00:33:26.020 "ctrlr_loss_timeout_sec": 0, 00:33:26.020 "reconnect_delay_sec": 0, 00:33:26.020 "fast_io_fail_timeout_sec": 0, 00:33:26.020 "psk": "key0", 00:33:26.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.020 "hdgst": false, 00:33:26.020 "ddgst": false 00:33:26.020 } 00:33:26.020 }, 00:33:26.020 { 00:33:26.020 "method": "bdev_nvme_set_hotplug", 00:33:26.020 "params": { 00:33:26.020 "period_us": 100000, 00:33:26.020 "enable": false 00:33:26.020 } 00:33:26.020 }, 00:33:26.020 { 00:33:26.020 "method": "bdev_wait_for_examine" 00:33:26.020 } 00:33:26.020 ] 00:33:26.020 }, 00:33:26.020 { 00:33:26.020 "subsystem": "nbd", 00:33:26.020 "config": [] 00:33:26.020 } 00:33:26.020 ] 00:33:26.020 }' 00:33:26.020 12:20:03 keyring_file -- keyring/file.sh@114 -- # killprocess 162953 00:33:26.020 
12:20:03 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 162953 ']' 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@954 -- # kill -0 162953 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162953 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162953' 00:33:26.020 killing process with pid 162953 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@969 -- # kill 162953 00:33:26.020 Received shutdown signal, test time was about 1.000000 seconds 00:33:26.020 00:33:26.020 Latency(us) 00:33:26.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.020 =================================================================================================================== 00:33:26.020 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.020 12:20:03 keyring_file -- common/autotest_common.sh@974 -- # wait 162953 00:33:26.284 12:20:03 keyring_file -- keyring/file.sh@117 -- # bperfpid=165066 00:33:26.284 12:20:03 keyring_file -- keyring/file.sh@119 -- # waitforlisten 165066 /var/tmp/bperf.sock 00:33:26.284 12:20:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 165066 ']' 00:33:26.284 12:20:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.285 12:20:03 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:26.285 12:20:03 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:33:26.285 12:20:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.285 12:20:03 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:26.285 "subsystems": [ 00:33:26.285 { 00:33:26.285 "subsystem": "keyring", 00:33:26.285 "config": [ 00:33:26.285 { 00:33:26.285 "method": "keyring_file_add_key", 00:33:26.285 "params": { 00:33:26.285 "name": "key0", 00:33:26.285 "path": "/tmp/tmp.xJxdrtNGY4" 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "keyring_file_add_key", 00:33:26.285 "params": { 00:33:26.285 "name": "key1", 00:33:26.285 "path": "/tmp/tmp.QLEr51Yui9" 00:33:26.285 } 00:33:26.285 } 00:33:26.285 ] 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "subsystem": "iobuf", 00:33:26.285 "config": [ 00:33:26.285 { 00:33:26.285 "method": "iobuf_set_options", 00:33:26.285 "params": { 00:33:26.285 "small_pool_count": 8192, 00:33:26.285 "large_pool_count": 1024, 00:33:26.285 "small_bufsize": 8192, 00:33:26.285 "large_bufsize": 135168 00:33:26.285 } 00:33:26.285 } 00:33:26.285 ] 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "subsystem": "sock", 00:33:26.285 "config": [ 00:33:26.285 { 00:33:26.285 "method": "sock_set_default_impl", 00:33:26.285 "params": { 00:33:26.285 "impl_name": "posix" 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "sock_impl_set_options", 00:33:26.285 "params": { 00:33:26.285 "impl_name": "ssl", 00:33:26.285 "recv_buf_size": 4096, 00:33:26.285 "send_buf_size": 4096, 00:33:26.285 "enable_recv_pipe": true, 00:33:26.285 "enable_quickack": false, 00:33:26.285 "enable_placement_id": 0, 00:33:26.285 "enable_zerocopy_send_server": true, 00:33:26.285 "enable_zerocopy_send_client": false, 00:33:26.285 "zerocopy_threshold": 0, 00:33:26.285 "tls_version": 0, 
00:33:26.285 "enable_ktls": false 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "sock_impl_set_options", 00:33:26.285 "params": { 00:33:26.285 "impl_name": "posix", 00:33:26.285 "recv_buf_size": 2097152, 00:33:26.285 "send_buf_size": 2097152, 00:33:26.285 "enable_recv_pipe": true, 00:33:26.285 "enable_quickack": false, 00:33:26.285 "enable_placement_id": 0, 00:33:26.285 "enable_zerocopy_send_server": true, 00:33:26.285 "enable_zerocopy_send_client": false, 00:33:26.285 "zerocopy_threshold": 0, 00:33:26.285 "tls_version": 0, 00:33:26.285 "enable_ktls": false 00:33:26.285 } 00:33:26.285 } 00:33:26.285 ] 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "subsystem": "vmd", 00:33:26.285 "config": [] 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "subsystem": "accel", 00:33:26.285 "config": [ 00:33:26.285 { 00:33:26.285 "method": "accel_set_options", 00:33:26.285 "params": { 00:33:26.285 "small_cache_size": 128, 00:33:26.285 "large_cache_size": 16, 00:33:26.285 "task_count": 2048, 00:33:26.285 "sequence_count": 2048, 00:33:26.285 "buf_count": 2048 00:33:26.285 } 00:33:26.285 } 00:33:26.285 ] 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "subsystem": "bdev", 00:33:26.285 "config": [ 00:33:26.285 { 00:33:26.285 "method": "bdev_set_options", 00:33:26.285 "params": { 00:33:26.285 "bdev_io_pool_size": 65535, 00:33:26.285 "bdev_io_cache_size": 256, 00:33:26.285 "bdev_auto_examine": true, 00:33:26.285 "iobuf_small_cache_size": 128, 00:33:26.285 "iobuf_large_cache_size": 16 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "bdev_raid_set_options", 00:33:26.285 "params": { 00:33:26.285 "process_window_size_kb": 1024, 00:33:26.285 "process_max_bandwidth_mb_sec": 0 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "bdev_iscsi_set_options", 00:33:26.285 "params": { 00:33:26.285 "timeout_sec": 30 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "bdev_nvme_set_options", 00:33:26.285 "params": { 00:33:26.285 
"action_on_timeout": "none", 00:33:26.285 "timeout_us": 0, 00:33:26.285 "timeout_admin_us": 0, 00:33:26.285 "keep_alive_timeout_ms": 10000, 00:33:26.285 "arbitration_burst": 0, 00:33:26.285 "low_priority_weight": 0, 00:33:26.285 "medium_priority_weight": 0, 00:33:26.285 "high_priority_weight": 0, 00:33:26.285 "nvme_adminq_poll_period_us": 10000, 00:33:26.285 "nvme_ioq_poll_period_us": 0, 00:33:26.285 "io_queue_requests": 512, 00:33:26.285 "delay_cmd_submit": true, 00:33:26.285 "transport_retry_count": 4, 00:33:26.285 "bdev_retry_count": 3, 00:33:26.285 "transport_ack_timeout": 0, 00:33:26.285 "ctrlr_loss_timeout_sec": 0, 00:33:26.285 "reconnect_delay_sec": 0, 00:33:26.285 "fast_io_fail_timeout_sec": 0, 00:33:26.285 "disable_auto_failback": false, 00:33:26.285 "generate_uuids": false, 00:33:26.285 "transport_tos": 0, 00:33:26.285 "nvme_error_stat": false, 00:33:26.285 "rdma_srq_size": 0, 00:33:26.285 "io_path_stat": false, 00:33:26.285 "allow_accel_sequence": false, 00:33:26.285 "rdma_max_cq_size": 0, 00:33:26.285 "rdma_cm_event_timeout_ms": 0, 00:33:26.285 "dhchap_digests": [ 00:33:26.285 "sha256", 00:33:26.285 "sha384", 00:33:26.285 "sha512" 00:33:26.285 ], 00:33:26.285 "dhchap_dhgroups": [ 00:33:26.285 "null", 00:33:26.285 "ffdhe2048", 00:33:26.285 "ffdhe3072", 00:33:26.285 "ffdhe4096", 00:33:26.285 "ffdhe6144", 00:33:26.285 "ffdhe8192" 00:33:26.285 ] 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "bdev_nvme_attach_controller", 00:33:26.285 "params": { 00:33:26.285 "name": "nvme0", 00:33:26.285 "trtype": "TCP", 00:33:26.285 "adrfam": "IPv4", 00:33:26.285 "traddr": "127.0.0.1", 00:33:26.285 "trsvcid": "4420", 00:33:26.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.285 "prchk_reftag": false, 00:33:26.285 "prchk_guard": false, 00:33:26.285 "ctrlr_loss_timeout_sec": 0, 00:33:26.285 "reconnect_delay_sec": 0, 00:33:26.285 "fast_io_fail_timeout_sec": 0, 00:33:26.285 "psk": "key0", 00:33:26.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:33:26.285 "hdgst": false, 00:33:26.285 "ddgst": false 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "bdev_nvme_set_hotplug", 00:33:26.285 "params": { 00:33:26.285 "period_us": 100000, 00:33:26.285 "enable": false 00:33:26.285 } 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "method": "bdev_wait_for_examine" 00:33:26.285 } 00:33:26.285 ] 00:33:26.285 }, 00:33:26.285 { 00:33:26.285 "subsystem": "nbd", 00:33:26.285 "config": [] 00:33:26.285 } 00:33:26.285 ] 00:33:26.285 }' 00:33:26.285 12:20:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:26.285 12:20:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.285 [2024-07-25 12:20:03.375033] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:33:26.285 [2024-07-25 12:20:03.375094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165066 ] 00:33:26.285 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.285 [2024-07-25 12:20:03.455997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.285 [2024-07-25 12:20:03.561264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.543 [2024-07-25 12:20:03.734597] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:27.110 12:20:04 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:27.110 12:20:04 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:27.110 12:20:04 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:27.110 12:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.110 12:20:04 keyring_file -- keyring/file.sh@120 -- # jq length 
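The key accounting done by the `jq` pipelines in this trace (count the keys with `jq length`, then pull `.refcnt` for a named key) can be sketched in Python. The JSON shape below is an assumption inferred from the `select(.name == ...)` and `.refcnt` filters in the trace, not from SPDK documentation; the real `keyring_get_keys` response may carry additional fields.

```python
import json

# Hypothetical keyring_get_keys payload; only .name and .refcnt are
# inferred from the jq filters in the trace, the values mirror the
# (( 2 == 2 )) and (( 1 == 1 )) checks that follow.
keys_json = """
[
  {"name": "key0", "refcnt": 2},
  {"name": "key1", "refcnt": 1}
]
"""

def get_refcnt(keys: list, name: str) -> int:
    # Mirrors: jq '.[] | select(.name == "<name>")' | jq -r .refcnt
    return next(k["refcnt"] for k in keys if k["name"] == name)

keys = json.loads(keys_json)
assert len(keys) == 2            # the jq length check at file.sh@120
print(get_refcnt(keys, "key0"))  # -> 2, the refcount checked at file.sh@121
```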
00:33:27.368 12:20:04 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:27.368 12:20:04 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:27.368 12:20:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:27.368 12:20:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:27.368 12:20:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:27.368 12:20:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.368 12:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.626 12:20:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:27.626 12:20:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:27.626 12:20:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:27.626 12:20:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:27.626 12:20:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.626 12:20:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:27.626 12:20:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.884 12:20:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:27.884 12:20:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:27.884 12:20:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:27.884 12:20:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:28.143 12:20:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:28.143 12:20:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:28.143 12:20:05 keyring_file -- keyring/file.sh@19 -- # rm -f 
/tmp/tmp.xJxdrtNGY4 /tmp/tmp.QLEr51Yui9 00:33:28.143 12:20:05 keyring_file -- keyring/file.sh@20 -- # killprocess 165066 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 165066 ']' 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 165066 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165066 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165066' 00:33:28.143 killing process with pid 165066 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@969 -- # kill 165066 00:33:28.143 Received shutdown signal, test time was about 1.000000 seconds 00:33:28.143 00:33:28.143 Latency(us) 00:33:28.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.143 =================================================================================================================== 00:33:28.143 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:28.143 12:20:05 keyring_file -- common/autotest_common.sh@974 -- # wait 165066 00:33:28.401 12:20:05 keyring_file -- keyring/file.sh@21 -- # killprocess 162940 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 162940 ']' 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 162940 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.401 12:20:05 keyring_file -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162940 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162940' 00:33:28.401 killing process with pid 162940 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@969 -- # kill 162940 00:33:28.401 [2024-07-25 12:20:05.647556] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:28.401 12:20:05 keyring_file -- common/autotest_common.sh@974 -- # wait 162940 00:33:28.970 00:33:28.970 real 0m15.000s 00:33:28.970 user 0m37.588s 00:33:28.970 sys 0m3.124s 00:33:28.970 12:20:05 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.970 12:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:28.970 ************************************ 00:33:28.970 END TEST keyring_file 00:33:28.970 ************************************ 00:33:28.970 12:20:06 -- spdk/autotest.sh@302 -- # [[ y == y ]] 00:33:28.970 12:20:06 -- spdk/autotest.sh@303 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:28.970 12:20:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:28.970 12:20:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.970 12:20:06 -- common/autotest_common.sh@10 -- # set +x 00:33:28.970 ************************************ 00:33:28.970 START TEST keyring_linux 00:33:28.970 ************************************ 00:33:28.970 12:20:06 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:28.970 * Looking for test storage... 
00:33:28.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.970 12:20:06 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.970 12:20:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.970 12:20:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.970 12:20:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.970 12:20:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.970 12:20:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.970 12:20:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.970 12:20:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:28.970 12:20:06 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:28.970 12:20:06 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:28.970 /tmp/:spdk-test:key0 00:33:28.970 12:20:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:28.970 12:20:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
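The `format_interchange_psk` helper traced above pipes the raw key through an inline `python -` snippet. A minimal standalone reconstruction is below; it assumes the payload is the ASCII key bytes with a little-endian CRC32 appended (the base64 body in the trace decodes to exactly that 36-byte layout, though the CRC endianness is an assumption rather than something the trace proves).

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """Sketch of the helper: base64(key bytes + CRC32) wrapped in the
    NVMe/TCP TLS PSK interchange framing. CRC byte order is assumed."""
    payload = key.encode()
    payload += zlib.crc32(payload).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        digest, base64.b64encode(payload).decode())

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
# The base64 body begins with the encoded ASCII key, as in the trace.
assert psk.startswith("NVMeTLSkey-1:00:MDAx")
print(psk)
```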
00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:28.970 12:20:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:29.230 12:20:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:29.230 12:20:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:29.230 /tmp/:spdk-test:key1 00:33:29.230 12:20:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=165619 00:33:29.230 12:20:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 165619 00:33:29.230 12:20:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:29.230 12:20:06 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 165619 ']' 00:33:29.230 12:20:06 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.230 12:20:06 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:29.230 12:20:06 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.230 12:20:06 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:29.230 12:20:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:29.230 [2024-07-25 12:20:06.351287] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:33:29.230 [2024-07-25 12:20:06.351352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165619 ] 00:33:29.230 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.230 [2024-07-25 12:20:06.433526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.230 [2024-07-25 12:20:06.526426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.798 12:20:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:29.798 12:20:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:29.798 12:20:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:29.798 12:20:06 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.798 12:20:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:29.798 [2024-07-25 12:20:06.808301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.798 null0 00:33:29.798 [2024-07-25 12:20:06.840348] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:29.798 [2024-07-25 12:20:06.840741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:29.798 12:20:06 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.799 12:20:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:29.799 168579104 00:33:29.799 12:20:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:29.799 864874576 00:33:29.799 12:20:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=165815 00:33:29.799 12:20:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 165815 
/var/tmp/bperf.sock 00:33:29.799 12:20:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:29.799 12:20:06 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 165815 ']' 00:33:29.799 12:20:06 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.799 12:20:06 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:29.799 12:20:06 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.799 12:20:06 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:29.799 12:20:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:29.799 [2024-07-25 12:20:06.915888] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:33:29.799 [2024-07-25 12:20:06.915943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165815 ] 00:33:29.799 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.799 [2024-07-25 12:20:06.996419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.057 [2024-07-25 12:20:07.103620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.316 12:20:07 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.316 12:20:07 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:30.316 12:20:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:30.316 12:20:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:30.575 12:20:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:30.575 12:20:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:31.144 12:20:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:31.144 12:20:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:31.713 [2024-07-25 12:20:08.906421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:31.713 
nvme0n1 00:33:31.713 12:20:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:31.713 12:20:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:31.713 12:20:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:31.713 12:20:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:31.713 12:20:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:31.713 12:20:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.972 12:20:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:31.972 12:20:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:31.972 12:20:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:31.972 12:20:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:31.972 12:20:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.972 12:20:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:31.972 12:20:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@25 -- # sn=168579104 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@26 -- # [[ 168579104 == \1\6\8\5\7\9\1\0\4 ]] 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 168579104 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:32.231 12:20:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:32.490 Running I/O for 1 seconds... 00:33:33.427 00:33:33.427 Latency(us) 00:33:33.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.427 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:33.427 nvme0n1 : 1.01 6381.38 24.93 0.00 0.00 19909.07 9472.93 30146.56 00:33:33.427 =================================================================================================================== 00:33:33.427 Total : 6381.38 24.93 0.00 0.00 19909.07 9472.93 30146.56 00:33:33.427 0 00:33:33.427 12:20:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:33.427 12:20:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:34.044 12:20:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:34.044 12:20:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:34.044 12:20:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:34.044 12:20:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:34.044 12:20:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:34.044 12:20:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.303 12:20:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:34.303 12:20:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:34.303 12:20:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:34.303 12:20:11 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:34.303 12:20:11 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:34.303 12:20:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:34.563 [2024-07-25 12:20:11.656545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:34.563 [2024-07-25 12:20:11.656716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebfc10 (107): Transport endpoint is not connected 00:33:34.563 [2024-07-25 12:20:11.657706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xebfc10 (9): Bad file descriptor 00:33:34.563 [2024-07-25 12:20:11.658706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.563 [2024-07-25 12:20:11.658723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:34.563 [2024-07-25 12:20:11.658736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.563 request: 00:33:34.563 { 00:33:34.563 "name": "nvme0", 00:33:34.563 "trtype": "tcp", 00:33:34.563 "traddr": "127.0.0.1", 00:33:34.563 "adrfam": "ipv4", 00:33:34.563 "trsvcid": "4420", 00:33:34.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.563 "prchk_reftag": false, 00:33:34.563 "prchk_guard": false, 00:33:34.563 "hdgst": false, 00:33:34.563 "ddgst": false, 00:33:34.563 "psk": ":spdk-test:key1", 00:33:34.563 "method": "bdev_nvme_attach_controller", 00:33:34.563 "req_id": 1 00:33:34.563 } 00:33:34.563 Got JSON-RPC error response 00:33:34.563 response: 00:33:34.563 { 00:33:34.563 "code": -5, 00:33:34.563 "message": "Input/output error" 00:33:34.563 } 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@33 -- # sn=168579104 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 168579104 00:33:34.563 1 links removed 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@33 -- # sn=864874576 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 864874576 00:33:34.563 1 links removed 00:33:34.563 12:20:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 165815 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 165815 ']' 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 165815 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165815 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165815' 00:33:34.563 killing process with pid 165815 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@969 -- # kill 165815 00:33:34.563 Received shutdown signal, test time was about 1.000000 seconds 00:33:34.563 00:33:34.563 Latency(us) 00:33:34.563 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.563 =================================================================================================================== 00:33:34.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.563 12:20:11 keyring_linux -- common/autotest_common.sh@974 -- # wait 165815 00:33:34.822 12:20:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 165619 00:33:34.822 12:20:11 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 165619 ']' 00:33:34.822 12:20:11 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 165619 00:33:34.822 12:20:11 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:34.822 12:20:11 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.822 12:20:11 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 165619 00:33:34.822 12:20:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.822 12:20:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.822 12:20:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165619' 00:33:34.822 killing process with pid 165619 00:33:34.822 12:20:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 165619 00:33:34.822 12:20:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 165619 00:33:35.082 00:33:35.082 real 0m6.290s 00:33:35.082 user 0m13.365s 00:33:35.082 sys 0m1.610s 00:33:35.082 12:20:12 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:35.082 12:20:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:35.082 ************************************ 00:33:35.082 END TEST keyring_linux 00:33:35.082 ************************************ 00:33:35.341 12:20:12 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@322 -- # '[' 0 -eq 1 ']' 
00:33:35.341 12:20:12 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:33:35.341 12:20:12 -- spdk/autotest.sh@362 -- # '[' 0 -eq 1 ']' 00:33:35.342 12:20:12 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:35.342 12:20:12 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:35.342 12:20:12 -- spdk/autotest.sh@377 -- # [[ 0 -eq 1 ]] 00:33:35.342 12:20:12 -- spdk/autotest.sh@382 -- # trap - SIGINT SIGTERM EXIT 00:33:35.342 12:20:12 -- spdk/autotest.sh@384 -- # timing_enter post_cleanup 00:33:35.342 12:20:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:35.342 12:20:12 -- common/autotest_common.sh@10 -- # set +x 00:33:35.342 12:20:12 -- spdk/autotest.sh@385 -- # autotest_cleanup 00:33:35.342 12:20:12 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:35.342 12:20:12 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:35.342 12:20:12 -- common/autotest_common.sh@10 -- # set +x 00:33:40.614 INFO: APP EXITING 00:33:40.614 INFO: killing all VMs 00:33:40.614 INFO: killing vhost app 00:33:40.614 WARN: no vhost pid file found 00:33:40.614 INFO: EXIT DONE 00:33:43.151 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:33:43.151 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:43.151 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:43.151 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:43.151 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:43.151 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:00:04.2 (8086 2021): Already using the ioatdma driver 
00:33:43.411 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:43.411 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:46.702 Cleaning 00:33:46.702 Removing: /var/run/dpdk/spdk0/config 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:46.702 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:46.702 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:46.702 Removing: /var/run/dpdk/spdk1/config 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:46.702 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:46.702 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:46.702 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:46.702 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:46.702 Removing: /var/run/dpdk/spdk2/config 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:46.702 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:46.702 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:46.702 Removing: /var/run/dpdk/spdk3/config 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:46.702 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:46.702 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:46.702 Removing: /var/run/dpdk/spdk4/config 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:46.702 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:46.702 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:46.702 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:46.702 Removing: /dev/shm/bdev_svc_trace.1 00:33:46.702 Removing: /dev/shm/nvmf_trace.0 00:33:46.702 Removing: /dev/shm/spdk_tgt_trace.pid3931468 00:33:46.702 Removing: /var/run/dpdk/spdk0 00:33:46.702 Removing: /var/run/dpdk/spdk1 00:33:46.702 Removing: /var/run/dpdk/spdk2 00:33:46.702 Removing: /var/run/dpdk/spdk3 00:33:46.702 Removing: /var/run/dpdk/spdk4 00:33:46.702 Removing: /var/run/dpdk/spdk_pid102240 00:33:46.702 Removing: /var/run/dpdk/spdk_pid103039 00:33:46.702 Removing: /var/run/dpdk/spdk_pid103834 00:33:46.702 Removing: /var/run/dpdk/spdk_pid104623 00:33:46.702 Removing: /var/run/dpdk/spdk_pid105472 00:33:46.702 Removing: /var/run/dpdk/spdk_pid106330 00:33:46.702 Removing: /var/run/dpdk/spdk_pid107238 00:33:46.702 Removing: /var/run/dpdk/spdk_pid108120 00:33:46.702 Removing: /var/run/dpdk/spdk_pid112679 00:33:46.702 Removing: /var/run/dpdk/spdk_pid112946 00:33:46.702 Removing: /var/run/dpdk/spdk_pid119316 00:33:46.702 Removing: /var/run/dpdk/spdk_pid119632 00:33:46.702 Removing: /var/run/dpdk/spdk_pid122132 00:33:46.703 Removing: /var/run/dpdk/spdk_pid12533 00:33:46.703 Removing: /var/run/dpdk/spdk_pid130390 00:33:46.703 Removing: /var/run/dpdk/spdk_pid130396 00:33:46.703 Removing: /var/run/dpdk/spdk_pid135732 00:33:46.703 Removing: /var/run/dpdk/spdk_pid138173 00:33:46.703 Removing: /var/run/dpdk/spdk_pid14002 00:33:46.703 Removing: /var/run/dpdk/spdk_pid140476 00:33:46.703 Removing: /var/run/dpdk/spdk_pid141790 00:33:46.703 Removing: /var/run/dpdk/spdk_pid143908 00:33:46.703 Removing: /var/run/dpdk/spdk_pid145174 00:33:46.703 Removing: /var/run/dpdk/spdk_pid154279 00:33:46.703 
Removing: /var/run/dpdk/spdk_pid154806 00:33:46.703 Removing: /var/run/dpdk/spdk_pid155329 00:33:46.703 Removing: /var/run/dpdk/spdk_pid15745 00:33:46.703 Removing: /var/run/dpdk/spdk_pid157784 00:33:46.703 Removing: /var/run/dpdk/spdk_pid158312 00:33:46.703 Removing: /var/run/dpdk/spdk_pid1587 00:33:46.703 Removing: /var/run/dpdk/spdk_pid158846 00:33:46.703 Removing: /var/run/dpdk/spdk_pid162940 00:33:46.703 Removing: /var/run/dpdk/spdk_pid162953 00:33:46.703 Removing: /var/run/dpdk/spdk_pid165066 00:33:46.703 Removing: /var/run/dpdk/spdk_pid165619 00:33:46.703 Removing: /var/run/dpdk/spdk_pid165815 00:33:46.703 Removing: /var/run/dpdk/spdk_pid20377 00:33:46.703 Removing: /var/run/dpdk/spdk_pid24653 00:33:46.703 Removing: /var/run/dpdk/spdk_pid32435 00:33:46.703 Removing: /var/run/dpdk/spdk_pid32441 00:33:46.703 Removing: /var/run/dpdk/spdk_pid37508 00:33:46.703 Removing: /var/run/dpdk/spdk_pid37664 00:33:46.703 Removing: /var/run/dpdk/spdk_pid37954 00:33:46.703 Removing: /var/run/dpdk/spdk_pid38305 00:33:46.703 Removing: /var/run/dpdk/spdk_pid38310 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3929044 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3930266 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3931468 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3932157 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3933229 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3933504 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3934606 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3934618 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3934987 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3936806 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3938121 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3938444 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3938972 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3939341 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3939673 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3939955 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3940241 00:33:46.703 Removing: 
/var/run/dpdk/spdk_pid3940545 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3941409 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3945024 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3945569 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3945871 00:33:46.703 Removing: /var/run/dpdk/spdk_pid3946136 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3946703 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3946728 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3947272 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3947534 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3947824 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3947893 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3948127 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3948393 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3949012 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3949293 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3949613 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3953584 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3958315 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3970204 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3970976 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3975542 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3975958 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3980636 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3987030 00:33:46.963 Removing: /var/run/dpdk/spdk_pid3990858 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4002572 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4012643 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4014502 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4015573 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4033793 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4038105 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4085215 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4090824 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4097535 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4104152 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4104157 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4105194 
00:33:46.963 Removing: /var/run/dpdk/spdk_pid4106133 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4107038 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4107682 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4107814 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4108085 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4108095 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4108176 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4109141 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4110301 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4111499 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4112263 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4112266 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4112532 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4113670 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4115026 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4123683 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4160877 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4166010 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4167805 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4169901 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4170173 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4170448 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4170718 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4171550 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4173639 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4175024 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4175722 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4178226 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4178787 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4179614 00:33:46.963 Removing: /var/run/dpdk/spdk_pid4184239 00:33:46.963 Removing: /var/run/dpdk/spdk_pid43278 00:33:46.963 Removing: /var/run/dpdk/spdk_pid43914 00:33:46.963 Removing: /var/run/dpdk/spdk_pid49087 00:33:47.222 Removing: /var/run/dpdk/spdk_pid52102 00:33:47.222 Removing: /var/run/dpdk/spdk_pid57957 00:33:47.222 Removing: /var/run/dpdk/spdk_pid6022 00:33:47.222 Removing: /var/run/dpdk/spdk_pid63956 
00:33:47.222 Removing: /var/run/dpdk/spdk_pid74044 00:33:47.222 Removing: /var/run/dpdk/spdk_pid81459 00:33:47.222 Removing: /var/run/dpdk/spdk_pid81486 00:33:47.222 Clean 00:33:47.222 12:20:24 -- common/autotest_common.sh@1451 -- # return 0 00:33:47.222 12:20:24 -- spdk/autotest.sh@386 -- # timing_exit post_cleanup 00:33:47.222 12:20:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.222 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:33:47.222 12:20:24 -- spdk/autotest.sh@388 -- # timing_exit autotest 00:33:47.222 12:20:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.222 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:33:47.222 12:20:24 -- spdk/autotest.sh@389 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:47.222 12:20:24 -- spdk/autotest.sh@391 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:47.222 12:20:24 -- spdk/autotest.sh@391 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:47.222 12:20:24 -- spdk/autotest.sh@393 -- # hash lcov 00:33:47.222 12:20:24 -- spdk/autotest.sh@393 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:47.222 12:20:24 -- spdk/autotest.sh@395 -- # hostname 00:33:47.223 12:20:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:47.481 geninfo: WARNING: invalid characters removed from testname! 
00:34:19.562 12:20:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:20.129 12:20:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:23.452 12:21:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:25.987 12:21:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:29.274 12:21:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:31.808 12:21:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:34.340 12:21:11 -- spdk/autotest.sh@402 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:34.599 12:21:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.600 12:21:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:34.600 12:21:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.600 12:21:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.600 12:21:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.600 12:21:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.600 12:21:11 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.600 12:21:11 -- paths/export.sh@5 -- $ export PATH 00:34:34.600 12:21:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.600 12:21:11 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:34.600 12:21:11 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:34.600 12:21:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721902871.XXXXXX 00:34:34.600 12:21:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721902871.Vo84eh 00:34:34.600 12:21:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:34.600 12:21:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:34.600 12:21:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:34.600 12:21:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:34.600 12:21:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:34.600 12:21:11 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:34.600 12:21:11 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:34:34.600 12:21:11 -- common/autotest_common.sh@10 -- $ set +x 00:34:34.600 12:21:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:34.600 12:21:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:34.600 12:21:11 -- pm/common@17 -- $ local monitor 00:34:34.600 12:21:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.600 12:21:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.600 12:21:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.600 12:21:11 -- pm/common@21 -- $ date +%s 00:34:34.600 12:21:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.600 12:21:11 -- pm/common@25 -- $ sleep 1 00:34:34.600 12:21:11 -- pm/common@21 -- $ date +%s 00:34:34.600 12:21:11 -- pm/common@21 -- $ date +%s 00:34:34.600 12:21:11 -- pm/common@21 -- $ date +%s 00:34:34.600 12:21:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902871 00:34:34.600 12:21:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902871 00:34:34.600 12:21:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721902871 00:34:34.600 12:21:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902871 00:34:34.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902871_collect-vmstat.pm.log 00:34:34.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902871_collect-cpu-load.pm.log 00:34:34.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902871_collect-cpu-temp.pm.log 00:34:34.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902871_collect-bmc-pm.bmc.pm.log 00:34:35.538 12:21:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:35.538 12:21:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:35.538 12:21:12 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.538 12:21:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:35.538 12:21:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:35.538 12:21:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:35.538 12:21:12 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:35.538 12:21:12 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:35.538 12:21:12 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:35.538 12:21:12 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:35.538 12:21:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:35.538 12:21:12 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:34:35.538 12:21:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:35.538 12:21:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.538 12:21:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:35.538 12:21:12 -- pm/common@44 -- $ pid=177427 00:34:35.538 12:21:12 -- pm/common@50 -- $ kill -TERM 177427 00:34:35.538 12:21:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.538 12:21:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:35.538 12:21:12 -- pm/common@44 -- $ pid=177428 00:34:35.538 12:21:12 -- pm/common@50 -- $ kill -TERM 177428 00:34:35.538 12:21:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.538 12:21:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:35.538 12:21:12 -- pm/common@44 -- $ pid=177430 00:34:35.538 12:21:12 -- pm/common@50 -- $ kill -TERM 177430 00:34:35.538 12:21:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.538 12:21:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:35.538 12:21:12 -- pm/common@44 -- $ pid=177453 00:34:35.538 12:21:12 -- pm/common@50 -- $ sudo -E kill -TERM 177453 00:34:35.538 + [[ -n 3816630 ]] 00:34:35.538 + sudo kill 3816630 00:34:35.548 [Pipeline] } 00:34:35.568 [Pipeline] // stage 00:34:35.584 [Pipeline] } 00:34:35.602 [Pipeline] // timeout 00:34:35.609 [Pipeline] } 00:34:35.626 [Pipeline] // catchError 00:34:35.631 [Pipeline] } 00:34:35.648 [Pipeline] // wrap 00:34:35.654 [Pipeline] } 00:34:35.668 [Pipeline] // catchError 00:34:35.677 [Pipeline] stage 00:34:35.679 [Pipeline] { (Epilogue) 00:34:35.693 [Pipeline] catchError 00:34:35.695 [Pipeline] { 00:34:35.710 [Pipeline] echo 00:34:35.712 Cleanup processes 
00:34:35.719 [Pipeline] sh 00:34:36.001 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:36.001 177560 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:36.001 177873 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:36.014 [Pipeline] sh 00:34:36.296 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:36.296 ++ grep -v 'sudo pgrep' 00:34:36.296 ++ awk '{print $1}' 00:34:36.296 + sudo kill -9 177560 00:34:36.308 [Pipeline] sh 00:34:36.590 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:51.524 [Pipeline] sh 00:34:51.807 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:51.807 Artifacts sizes are good 00:34:51.824 [Pipeline] archiveArtifacts 00:34:51.831 Archiving artifacts 00:34:52.043 [Pipeline] sh 00:34:52.330 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:52.346 [Pipeline] cleanWs 00:34:52.357 [WS-CLEANUP] Deleting project workspace... 00:34:52.357 [WS-CLEANUP] Deferred wipeout is used... 00:34:52.364 [WS-CLEANUP] done 00:34:52.366 [Pipeline] } 00:34:52.386 [Pipeline] // catchError 00:34:52.399 [Pipeline] sh 00:34:52.681 + logger -p user.info -t JENKINS-CI 00:34:52.690 [Pipeline] } 00:34:52.706 [Pipeline] // stage 00:34:52.712 [Pipeline] } 00:34:52.729 [Pipeline] // node 00:34:52.735 [Pipeline] End of Pipeline 00:34:52.771 Finished: SUCCESS